Category Archives: Philosophy

The Culture Itself May Be Unjust

If there is a reasonable alternative to an inequality that causes undue and preventable harm, then that alternative should be selected to achieve a more just and fair world, regardless of whether the injustice results from economic, political, social, or institutional systems, structures, or pressures. The hierarchical structure of our society, a strategy for survival however inadequate that strategy proves to be in practice, causes unjust inequalities that lead to unjust health inequalities, and neither is a necessary conclusion. Therefore, as a society we should seek to implement a reasonable alternative, which entails a decentralization of decision-making power to distribute control of our lives more broadly, because lack of control is the greatest factor contributing to the unjust inequalities in our society.

Either inequalities are just or they are unjust. Inequalities are natural phenomena, so it cannot be the case that they are inherently unjust. For example, the day is naturally warmer and brighter than the night, both being results of the unequal distribution of sunlight across different parts of the planet at different times. Gorillas are stronger than chimpanzees as a result of their natural physical compositions. Men can neither give birth to a child nor carry a fetus to term, because they lack the necessary physical components to do so. Women, on the other hand, are by natural physical composition the sex of our species that bears the burden of both carrying fetuses to term and suffering the pain of giving birth. None of these examples is inherently unjust, because there is as yet no reasonable alternative to any of them and, as such, no choice is available to alter the distribution of the inequalities. So, if inequalities of themselves are not unjust, then there must be other factors that commingle with inequalities when people feel they are unjust.

Inequalities are unjust if they are unnecessary and a cause of preventable harm. There is nothing moral or ethical about the day-night dichotomy described above; it is merely a description of what is. The mere fact that gorillas are physically stronger than chimpanzees, or for that matter one human stronger than another, is likewise simply a description of the differences between them, and as such there is nothing immoral about the inequalities; in fact, they are amoral. The physical differences between men and women are not of themselves immoral or unjust; they are merely descriptions of what is. However, when natural differences lend themselves to alternative options, such as who has control over if and when a woman is to carry and bear a child, then morality and justice come into effect. For instance, women in the United States were at one point considered by law the legal property of the men they married, who also had claim to the woman's reproductive capacity. Women had to fight a long and arduous battle for the right to control their own reproduction; i.e., for women to control the decisions of if and when to have a child, when to use contraception, and when to have an abortion. The unfair and biased control exercised over women's sex difference by men was an injustice to women. Since men do not have to suffer the pain of carrying a fetus to term or the pain of birth, and furthermore, since men have no claim to a woman's body because it is not theirs, men have no right or justification to impose upon any woman that she must bear these burdens against her will if she elects not to suffer them. The redistribution of decision-making authority over women's bodies from men to women was a just redistribution of control.

It is not the existence of physical differences that makes circumstances unjust, just as natural inequalities are not inherently unjust; rather, it is when social interventions exploit those differences and lead to unfair situations in which harm occurs that a situation is identified as unjust. Therefore, because that which is unjust results from social interactions in which there are reasonable alternatives that lead to no harm or to less harm, we should obligate the actions and decisions that limit harm, hold responsible those who violate those obligations and cause harm, and take on as a positive duty the limiting of the unfair and unjust harms that occur.

Sex, however, is not the only social factor that leads to unjust inequalities; class and social status are also relevant considerations that lead to unjust outcomes and situations. Another example of unjust inequality, one resulting from the hierarchical structure of our society as a consequence of its economic system, was revealed by the Whitehall Studies conducted in London.[1] Michael Marmot, the author of "Social Causes of Inequality in Health," analyzing the longitudinal Whitehall Studies, identified a person's belief in a lack of control over their environment as one of the leading factors in diminished health. Marmot found that when a society is based upon a hierarchical structure of organization there is a gradient of mortality, wherein each lower stratum has higher mortality and disease rates than the stratum above it.[2] The Whitehall Study tracked men in white-collar positions, none of whom were impoverished and all of whom were gainfully employed, and it was there that the pattern was identified. The pattern was also consistent for control of one's living conditions and was exacerbated by economic constraints such as poverty, which reveals that social class, i.e., the social stratum of an entire group of people, is vulnerable to this pattern. After analyzing trends in the identified pattern and how it shifts over time, Marmot correlated these shifts with governmental policy and suggested that "[i]f it can vary, presumably as the unintended consequence of government policies and other trends, it should be possible to vary it as an intended consequence."[3] This reveals that the health inequalities observed in the Whitehall Studies are not necessarily inevitable, and because they are not inevitable, there may be reasonable alternatives to the socially caused factors behind the disparities; as such, the disparities could be unjust.

It could be argued that the data are wrong, or that there are no reasonable alternatives to select from. Marmot, however, is not the only one who has identified class differences as a relevant factor in health disparities and inequalities; Norman Daniels has done so as well. Daniels, in the chapter "Three Questions of Justice" of his book Just Health, identified class as a greater determinant of health status than race.[4] Given that there are reasonable alternatives to the manner in which health is distributed along economic lines, a distribution exacerbated by racial and gender factors, Daniels proposes this theory of justice:[5]

Failing to promote health in a population, that is, failing to promote normal functioning in it, fails to protect the opportunity or capability of people to function as free and equal citizens. Failing to protect that opportunity or capability when we could reasonably do otherwise…is a failure to provide us with what we owe each other. It is unjust.

One of the major issues with the manner in which health care, and health in general, is distributed across a society is that access tends to be delineated by the economic capacity to purchase; that is, by spending power. The problem with this, as Daniels asserts, is that it causes us to "treat health care as a commodity," as something that is not of "special importance" to society, but that is not the reality.[6] Daniels further observes that in our society goods such as jobs and education are distributed "very unequally across subgroups that differ by race, ethnicity, gender, or class."[7] One's position or stratum in the hierarchical structure directly correlates with one's ability to control one's environment and its conditions, because success in the economic structure of the market depends upon one's ability to purchase. This market structure, however, fails Daniels' theory of justice because the economic bar to access limits the opportunity for people to function as free and equal citizens.

Margaret Whitehead has also observed health disparities that directly relate to the social stratum people belong to. In Whitehead's article "The Concepts and Principles of Equity and Health," it is noted that "there is consistent evidence that disadvantaged groups have poorer survival chances, dying at a younger age than more favoured groups."[8] One of the reasons for this difference that Whitehead identifies is that there are inequalities in the access to and quality of health services, and that "those most in need of medical care, including preventive care, are least likely to receive a high standard of service."[9] Whitehead lists seven "differentials" that help to clarify whether inequalities are unnecessary and unfair, or simply inequalities:[10]

(1) Natural, biological variation.

(2) Health-damaging behavior if freely chosen, such as participation in certain sports and pastimes.

(3) The transient health advantage of one group over another when the group is first to adopt a health-promoting behaviour (as long as other groups have the means to catch up fairly soon).

(4) Health-damaging behavior where the degree of choice of lifestyles is severely restricted.

(5) Exposure to unhealthy, stressful living and working conditions.

(6) Inadequate access to essential health and other public services.

(7) Natural selection or health-related social mobility involving the tendency for sick people to move down the social scale.

The first three, Whitehead suggests, are simply inequalities or are acceptable, and I agree, as this is similar to what I have argued above. The last four differentials, however, all share relevance to the type of unjust inequalities that can be distinguished among the social strata of the hierarchical structure of society. Particularly relevant to Marmot's discussion is (5), "exposure to unhealthy, stressful living and working conditions," because it pertains to the lack of control one can exercise over one's environment, which leads to health inequalities.

It does not appear that the data are incorrect, since similar data have been identified by multiple sources drawing very similar conclusions, so the remaining objections to the claim that the inequalities are unjust will fall upon the reasonableness of the alternatives. To deny that there are alternative social structures is to deny the reality of the world in which we live, because not all societies have such stark hierarchical structures. In addition, it is possible to craft social and economic policies that level up the least well-off, the lower strata, raising their standard of living and their personal control of their environments toward a more equitable distribution. It would further be possible to augment the capitalist structure of the political system so that broader participation across the population would yield a greater sense of control over people's lives. A system of collective ownership with collective bargaining could be instituted for how companies organize themselves, providing people with more control over their working environments. In fact, Whitehead recommends "decentralizing power and decision making" as one of the core actions to be taken to mitigate unjust inequalities within society.[11]

If it is argued that these recommendations are unfair because they suggest a shift in culture, and that it is not right to seek to change culture, then the most obvious response is that culture is, by definition, a social strategy for survival. Because culture is a strategy, it is an institution, a human creation, and as such it was not inevitable but is rather something that can both grow and change. Because it can grow and change, there are potential alternatives, as has just been evinced; and because it is social, it is the factor identified earlier as responsible for the injustice when harm results. Therefore, if the culture is unjust and there is a reasonable alternative, and there is, then there is also an obligation to strive toward that alternative in order to limit the harms resulting from the inequalities inherent in the current culture.

The goal is not to create a completely egalitarian society or to rid the world of all inequalities, but rather to seek a more just society for all members. In regard to this, Whitehead wrote: "[w]e will never be able to achieve a situation where everyone in the population has the same level of health, suffers the same type and degree of illness and dies after exactly the same life span. This is not an achievable goal, nor even a desirable one."[12] It is, however, the goal to respect the humanity and dignity of each and every human being, to honor the agency and autonomy of every person, and to accept that we all need to feel as though we have control over our own lives. The reality is that we all have a shared interest in seeking the greatest possible aggregate health, because that is necessary for us all to flourish, which is what I believe the true definition of justice is. Conversely, that which intervenes in the best possible, or greatest, potential for the flourishing of all people is unjust if there is a reasonable alternative to select. Many of the inequalities that exist today, when measured against the differentials proposed by Margaret Whitehead, are revealed to be unjust. Thus, we as a society should seek to limit their impacts by reducing the effects of the hierarchical structure of our society.

[1] Marmot, Michael. "Social Causes of Inequality in Health." In Public Health, Ethics, and Equity, edited by Sudhir Anand, Fabienne Peter, and Amartya Sen, 37-61. (New York: Oxford University Press, 2004), 38.

[2] Ibid.

[3] Marmot, 41.

[4] Daniels, Norman. Just Health: Meeting Health Needs Fairly. (New York: Cambridge University Press, 2008), 14.

[5] Ibid.

[6] Daniels, 20.

[7] Daniels, 13.

[8] Whitehead, Margaret. "The Concepts and Principles of Equity and Health," Health Promotion International vol. 6 (1991), 218.

[9] Ibid.

[10] Whitehead, 219.

[11] Whitehead, 223.

[12] Whitehead, 219.


An Obligation to Immunize

It is just to limit harms to our society's public health only insofar as we are members of a moral community—the group of people to whom duties and obligations are owed and from whom they should be expected—because public health is a public good, and all members either benefit or are harmed by what we permit. Immunization requirements are an infringement upon an individual's liberty to determine their own participation or that of their children. However, immunization is also a proven method of protecting the public health and of limiting harm to a moral community. Therefore, notwithstanding its parentalistic nature, imposed inoculation is a justifiable and defensible means of providing for the public health and protecting our public good.

All morality, moral principles, and moral precepts at some point reduce to intuitions; i.e., to feelings about what is right and wrong, good and bad, just and unjust. The most fundamental of all intuitions, and the most necessary condition for morality or for the moral community to exist, is a right to life; i.e., a right to live. The right to life is a prima facie claim based upon a priori reasoning: "I think…I am."[1] Because I am, I must have a right to be, otherwise I would not be. Therefore, until proven otherwise, it has been assumed that there is a right to life. It is from this basis that all other rights, duties, obligations, protections, and theories of justice emerge to secure a particular quality of life. Furthermore, without life, none of these other rights makes any sense, and they become inconsistent, because there can be no right to liberty, which is to live one's life uninhibited, if there is no right to live. Thus, the right to life is the most fundamental and foundational principle of morality, and about it there is little debate.

By corollary, if there is a right to life, then there must be a right to all the things that are necessary for life. There is a right to life. Therefore, there is a right to all the things that are necessary for life. So each person has a right to water, air, food, education, security, safety, health, and whatever else falls under the penumbra of what is necessary for the life of a person.

Quality of life, on the other hand, is much more heavily debated and difficult to assert in terms of the positive duties of others. If there is a threshold to the quality of a person's life below which life is unacceptable and repugnant to the moral community, then the moral community has a positive duty to ensure that no member of the community falls or remains below that threshold. Disregarding or disavowing members is not an option, because it does not absolve one's responsibility to them. Drawing a principled threshold, however, is not easy. For example, it must be delineated whether all members of the moral community should have the opportunity to live to at least twenty-five years of age, to one hundred and twenty-five years of age, or to some point in between. The level of pain a person must be in to trigger help from the moral community must be determined: whether the hint of pain is sufficient, or whether the pain must be chronic or life-threatening. The types and severities of illnesses or diseases that are permissible in terms of quality of life must be identified: whether the flu is enough to trigger obligated assistance, or whether something with a higher rate of fatality, such as the Zika virus, is required. It must be defined where upon the spectrum of wealth and poverty the threshold is breached and a person's quality of life is left wanting. Food is one of the rights entailed under the penumbra of the right to life, and thus each person should be guaranteed a daily caloric intake sufficient to provide them the means to a productive life, but that does not also guarantee that the food must always be to their liking.

It is clear that at some point requiring further positive duties from the moral community becomes over-demanding, and the redistribution of time and resources to those who are less well-off becomes harmful and even counterproductive, but exactly where that point lies is not entirely clear. A sufficient threshold could be established by resorting to the "original position" behind the "veil of ignorance" proposed by John Rawls.[2] If all the members of the moral community were to enter into a debate about the just distribution of health and resources while ignorant of the positions or roles they would occupy in society after leaving the decision-making table, it is supposed that they would agree on a threshold and correlating positive duties that would be most fair to the members who will be least well-off, because no decision maker knows whether that will be their position. Rawls argues that as a result of their ignorance in this hypothetical situation, the decision makers will make decisions in their own self-interest, assuming their position may be the worst, and will seek to achieve the greatest quality of life possible. This is the threshold of a sufficient quality of life that should be the standard for determining positive duties within the moral community and for triggering actions to assist those who have fallen below it.

It could be argued from a utilitarian standpoint that the best outcome or consequence might not be to improve the conditions of the least well-off, that doing so may cause more harm by leveling down some of the best-off in society, and that redistributive justice is unjustified on these grounds. In response, it could be argued, though not easily, that if the net happiness resulting either from leveling down while leveling up, or from simply increasing the happiness of the best-off, is the same, then there is no justification for either on utilitarian grounds, and a principle of justice, like the original position, could be implemented to determine the best course of action. In addition, all things being equal, there is also something that feels wrong about not limiting the pain and suffering of members of our community when there is a reasonable alternative and no good justification for failing to act.

In addition to the positive duty to mitigate the factors and conditions affecting those who fall below the quality-of-life threshold, there is also entailed a negative duty not to impose upon any member of the moral community harms that would force them below the threshold. This particular negative duty does not supersede other duties not to harm per se, so long as the two do not conflict; if they should, then the duty that mitigates the most harm has authority, so long as the threshold is not breached. Yet as with the positive duties, there must also be a limit to the negative duties, because if there is no limit, then potentially every action could be shown to be a harm too great to bear. The "harm principle" proposed by John Stuart Mill, which states that "the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others," is a good starting point.[3] First, there has to be some form of justification for when power can rightfully be exercised upon a breach of duty, which the harm principle expresses. Second, as was expressed above in the section on quality of life, self-interest is one of the primary motivators for defining the just threshold of a sufficient quality of life within a moral community. When quality of life and self-protection are conceived of in these terms, it becomes apparent that both reach beyond the individual and actually depend upon the well-being, or aggregate health, of the community as a whole. This is what binds the moral community together to form the interdependence necessary to manage collective action problems.

A public good is something that all people within a society benefit from, regardless of whether a person chooses to participate in its use. It is not obligatory that any person participate in the use of a public good; any person may, at their liberty, opt out of using it. A public good is not only something made by humans, such as a park or road; it may also be something like the air (oxygen) or water: things humans and other creatures need and share to survive, or that are sufficient for the enjoyment or fulfillment of life. A public good, however, remains a public good only so long as it continues to provide a benefit to the society. Water is necessary for the survival of humans and thus for the moral community. If a public good such as the water humans share becomes polluted or otherwise unusable, then it will cease to provide the necessary benefit to human life. If roads are not kept clear of debris or are otherwise unusable, then they will cease to provide their benefit to human mobility, which in this society is essential to human life. Thus, the public has a negative duty not to harm the public good, and each person has a positive duty to protect and ensure the sustainability of the public good, both for themselves and for the rest of the members of their society. Because everyone is responsible for the maintenance of public goods, such as clean and healthy potable water, clean air, nutritious food, and the general aggregate health of a society, these are collective action problems.

The aggregate level of health of a society is a benefit to the public, regardless of whether any member of the public elects to participate in its use. When the aggregate level of health of a society is threatened or harmed, it ceases to be a benefit to the public. Thus, each individual has the duty to protect all public goods and to ensure that they remain beneficial, regardless of whether they opt in or out of using them. Therefore, public health is a public good, and as such the public has a positive duty to protect it and a negative duty not to harm it. This is further justified on the grounds that regardless of whether a person wants to benefit from or contribute to the general health of their society, i.e., by eating healthy, exercising, practicing safe sex, or participating in vaccination programs, they do in fact benefit from its existence. Thus, if a person does not contribute to the general health of their society and yet benefits from it, then they are free riding on the contributions of others, which gives them an unfair advantage and also poses a harm to our moral community. If enough people opt out of contributing to the general health of the public, then the members of the society will be placed in jeopardy of breaching the just threshold established by the original position, thus breaching their duty not to cause harm to others. Furthermore, it will undermine the integrity of the interdependence of the moral community. To overcome the tendency to free ride and to meet the positive duty to protect the public good, the moral community is justified in creating an institution with the authority to impose proven methods of maintaining the just threshold upon the general public. This will also work to limit the over-demandingness of positive duties upon individuals, because it will remove from them the liberty to opt out. All of this is justified by applying the interpretation of the harm principle stated earlier.

For any activity in which the general public is obligated by law to participate, the state has a duty to make that activity as safe as possible, without unreasonable risk of harm. The public school system is something the general public is obligated by law to participate in. Therefore, the state, which is the public, has a duty to make the public education system as safe as possible for the general public to participate in. Since most children go to school, many of whom attend public school, and start school at a very young age, usually before they are considered morally culpable and responsible, or capable of responding to the positive and negative duties adults are bound by, the school is perhaps the best institution through which to manage and track inoculations against infectious diseases for the general public. Making vaccinations obligatory for children as they enter the public education system will provide the double benefit of creating a healthier environment for the students and increasing the level of health of the entire population.

There will of course be objections to this on both philosophical and religious grounds. However, it is difficult to see what argument a libertarian could make in opposition, since this course of action is justified under the principle of liberty. One utilitarian argument has already been raised and rejected earlier in this paper, and since this course of action is for the common good and aims at good consequences, it does not seem plausible that a successful utilitarian objection can be made against it. There may, however, be some ground for a deontological argument that the children in the public school system are being used as mere means to an end, i.e., that their humanity and their own personal goals are not being respected. There are two ways of overcoming this objection: (1) the practice of inoculating children is not only for the benefit of the public but also for the children themselves; (2) there are other options for the education of children, such as private school and homeschooling. An argument could be made regarding class discrimination, because parents who are poor might not be able to afford an alternative to public education even if they desired one out of a sincere objection to their children being inoculated. This is perhaps one of the strongest objections, because it is based on the theory of justice, and there does not seem to be an easy means of remedying the concern. Many of the other objections will most likely be made on religious grounds, and these also do not seem easy to overcome because of how connected religious ideals are to people's identities. The remaining objections are most likely to be made on some form of scientific grounds, questioning the reliability of the vaccinations currently available to the public.

Since most of the objections have been overcome, and since a vast majority of the population will potentially be inoculated through the public school system or by other means, it seems permissible to exempt sincerely conscientious objectors from inoculation, so long as a sufficient portion of the population is inoculated against the infectious diseases that cause the most problems and are the easiest to remedy given current technologies and costs, and so long as the just threshold can be maintained. It is clear that inherent in the course of action recommended in this paper there is a certain amount of parentalism (a gender-neutral term for paternalism), which often arouses contempt. However, it is not clear that this level or manner of parentalism is unjust or inherently bad or wrong. This seems especially so if the harm principle as interpreted earlier is accepted, wherein the only permissible reason to intervene in an individual's liberty is to limit harm. Inoculation is a proven and effective method of limiting harm. Thus, by intervening in the liberty of individuals and imposing inoculation upon the population, harms are being limited. Therefore, requiring that children be immunized as they enter the public education system is justifiable and, lacking a sufficient reasonable alternative, can be viewed as obligatory.

[1] Rene Descartes, Meditations on First Philosophy (Second Meditation).

[2] Rawls, John. A Theory of Justice. http://www.csus.edu/indiv/c/chalmersk/ECON184SP09/JohnRawls.pdf

[3] John Stuart Mill, “Introductory,” in On Liberty (1859), 18.

Guilty: Regardless of Whether You Knew It Was Wrong

At the heart of morality lies the responsibility a person has to commit or omit a particular action, which is usually defined as either right or wrong, respectively. If the person elects the right action, then the action tends to be morally praiseworthy. Conversely, if the person elects the wrong action, then the action tends to be morally blameworthy, and the person responsible could be subject to some form of punishment. But is it possible for an individual both to commit a wrongful act and not be responsible for its commission, and if so, under what circumstances is this possible? For example, if all actions are determined by causes, which essentially denies the existence of free will, is the person still morally culpable for her actions? Or if the moral parameters of a particular culture are such that an immoral act is not conceived as such, does that excuse a person of his moral responsibility? Michele M. Moody-Adams considers the complications of moral responsibility across both space and time and draws the conclusion that neither absolves moral culpability.[1] I believe that in regard to particular events there can be extenuating circumstances, which may potentially absolve a person of moral responsibility. However, in the absence of these extenuating circumstances, there are some things a person can and should be held morally responsible for, regardless of whether they knew it was right or wrong at the time.

There can be no responsibility if there is no power to choose an alternative action. In other words, if a person cannot choose to do otherwise, then they cannot be held responsible for the only thing that they could have done. Moral responsibility presupposes free will, and free will presupposes the capability of choice. Yet free will is more complicated than the actual act of choice, because although a woman may will something to be, that does not mean she is capable of making it come to be. For example, she may will that she not get into a car accident, and may even make the active choice to drive cautiously so as not to get into an accident, but beyond her will and her choice she is still involved in a collision. This, however, does not negate her free will, because she definitely willed there not to be a collision. What is important, and at the heart of the existential question, is whether she has the capability of choice or whether her action is constrained by causes. If the former is true, then she may be morally responsible for the collision, but if the latter is true, then she cannot be morally responsible for the collision.

In the discussion of determinism and free will, as P. F. Strawson[2] accurately notes in the article Freedom and Resentment, lies a metaphysical problem, i.e., whether free will does in fact exist. On the one hand, he conceives of "optimists" as those who believe determinism is at least not false. On the other hand, he conceives of "skeptics" as those who believe that if determinism is true, then people cannot be held morally responsible. Strawson suggests that an optimist will promote the "efficacy of the practices of punishment," while a skeptic will argue: "just punishment and moral condemnation imply moral guilt and guilt implies moral responsibility and moral responsibility implies freedom and freedom implies the falsity of determinism."[3] However, what we have here is a series of implications and arguments, but nothing definitive about the existence of free will. The distinction that Strawson draws is that we as people feel differently depending both upon the relationship we share with other human beings and upon the intentions behind the actions that affect us, or what he calls "reactive attitudes."[4] In other words, Strawson argues that people hold others morally responsible for their intentions, given that they are capable of forming intentions, not their actions specifically.

Thus far the discussion has been focused on the capability of choice, but it cannot go without note that there may be constraints upon that capability which supersede the metaphysical argument. As was just shown, there are definite differences of opinion about the existence of determinism. It is obvious that in a line of stacked dominoes one domino does not have a choice to push the next after the process has begun, but it is not altogether clear whether people are bound by the same constraints, because of the emotional capacities we possess. Thus, without a definitive resolution to the metaphysical problem, and for the sake of argument, it will be supposed that determinism and free will coexist. Furthermore, it is clear that if a person is pushed by a sufficiently strong force, the person will be physically moved, but it is not clear that the person's response to being moved is determined. For example, if the force that moves a man is another man, it is fully reasonable to suppose that the man who is the object of the push may respond with either contempt or approval, depending upon the situation and the circumstances. A major component of how that situation and those circumstances are interpreted has much to do with the socialization the man who is the object of the push has received, and this is heavily dependent upon the culture of which he is a part.

Moral responsibility is not an easy question to answer because it may presuppose either that there are moral facts that apply universally across both space and time, or some form of moral relativism. Regarding the former assertion, not only does this present a conflict within one culture between the different moral theories of right and wrong, but it also encounters the further complication of potentially praising or blaming people for actions whose moral value they may not have been capable of distinguishing. In regard to the latter assertion, the issue with moral relativism is that it becomes nearly impossible to hold any person accountable for their actions, because morality becomes relative to the individual, and either all actions can be conceived of as wrong, or no actions can. It all depends upon the individual and their own personal conception of right and wrong, which all but drains morality of its objective and non-personal components. Now, it could be the case that the reason people believe morality entails an objective reality is because of shared relativistic moral values, but that is not the general intuition regarding morality. There are some things, like murder, which is the unjustified killing of another person, that people intuitively feel to be wrong regardless of whether it happens to their person, to someone they share a special relationship with, or to a stranger with whom no special bonds exist. Therefore, for the sake of this argument, moral relativism can be rejected, which leaves us with moral facts.

This, however, does not absolve us of problems, because we now have to determine whether moral facts can be applied across both space and time, and for this we will return to Moody-Adams' argument in Culture, Responsibility, and Affected Ignorance. In the beginning of her argument, it is suggested that there is "a crucial connection between culture and agency,"[5] which means that the capability of choice is dependent on culture. The argument tracks what Moody-Adams calls "moral ignorance," and she argues that cultural limitations can be the cause of this ignorance.[6] This moral ignorance may either take place within one's own culture, which may be unable to critically analyze its own practices, or between cultures, where there may be a bar to understanding each other's practices. Moody-Adams rejects that either of these conditions absolves a person of responsibility, regardless of space and time.

The first major point that Moody-Adams makes to support the claim that moral responsibility applies across both space and time is the rejection of what she terms the "inability thesis."[7] The thesis suggests that a person's culture can potentially render them "unable to know that certain actions are wrong" because it inhibits the person's ability to critically analyze their culture and practices.[8] In rejecting the inability thesis, Moody-Adams asserts that it is not so much that their culture has imposed upon them a "blindness" of sorts, but rather that the actor is unwilling to consider the wrongfulness of their practices.[9] In other words, regardless of the person's culture, they are capable of choosing to consider the rightness or wrongness of an action. Now, this would seem to be a leap in logic, or at least a presumption, except that the assertion is based on the concept of the transmission of culture. The characteristics of culture relevant to this argument are the "normative expectations about emotion, thought, and action" that become social and legal rules and are supported by the "nonlegal sources" of the group or society.[10] This support tends to come from those who desire to "protect the life of the group," who internalize these rules, but who also accept the demands of the culture and are capable of criticizing their own conformity with the rules. Therefore, they are not unable, but rather choose "not to know what one can and should know," which is what Moody-Adams calls "affected ignorance."[11]

The second major point Moody-Adams makes concerns "affected ignorance and the banality of wrongdoing," i.e., the common occurrence of actively choosing not to know what one could and should know and continuing to do wrong. Moody-Adams argues that affected ignorance takes several forms, but highlights four that are particularly relevant: (1) "linguistic deceptions," or codes used to conceal the truth of the wrongfulness of an action even from ourselves; (2) "the wish to 'know nothing'" of how wrong the means were to achieve a particular end, so as to avoid responsibility; (3) "ask no questions," to avoid the responsibility of either stopping or preventing a wrong from occurring; and (4) "to avoid knowledge of our human fallibility," the failure to acknowledge that our "deeply held convictions may be wrong."[12] These four forms of affected ignorance are methods people use to express and display an unwillingness to take responsibility for their actions or to consider alternatives.
Moody-Adams further argues that these four forms are outgrowths of the "banality of wrongdoing," which is denied for two principal reasons: an unwillingness to conceive of our "cultural predecessors" as having "perpetuated a practice embodying culpable moral ignorance," and the common and philosophical perception that there are "only two responses to behavior we may want to condemn."[13] The first response is what she calls "a rigorously moralistic model" that blames without forgiveness, and the second is the "therapeutic model" that forgives without attributing blame.[14] Moody-Adams is not satisfied with these two models, though, and offers a third, the "forgiving moralist's model," which connects the banality of wrongdoing with affected ignorance (in its many forms) and acknowledges "the serious effort required to adopt an appropriately critical stance toward potentially problematic cultural assumptions," which the first model lacks.[15] This third model permits us to hold people morally responsible across space and time because, while it acknowledges the cultural constraints upon an individual, it also acknowledges the agency, or capability, of an individual to critically analyze the practices of their culture for themselves.

So far, the argument has been mostly about asserting the capability of a person to choose to critically analyze their cultural practices, thus attributing moral responsibility to people across both space and time, none of which I believe Strawson would disagree with. However, Moody-Adams' next point focuses on insanity and how it relates to moral responsibility, which I think Strawson might find contentious. The reason that insanity becomes a point of contention in these arguments about moral responsibility is that it directly conflicts with the assertion that all people have the capability to know what they can and should know, and to think critically about their cultural practices. As mentioned above, Strawson asserts that "reactive attitudes," or the responses that people have to the actions of others, are dependent upon the intentions of the initiators of the action. If, for instance, a person is either insane or incapable of critical analysis or of being aware of the normative cultural expectations of emotion, thought, and action, then they cannot, or should not, be held morally responsible; furthermore, most people would generally not hold them responsible. An example should flesh this concept out: a child who is not yet traditionally considered to be morally culpable, say under four years of age, hits their parent in the eye, irreversibly damaging it. The reactive attitude, and thus the attribution of moral responsibility, would be much different if the child were twenty years of age when this happened, given that they were not insane at the time of the incident. Whereas the four-year-old would most likely not have his intentions scrutinized, the twenty-year-old most likely would. This is the distinction that Strawson draws in his argument about insanity, and he attributes to those who are considered insane the same level of excusability as a young child.

Moody-Adams, on the other hand, while admitting that it is possible for a person to be insane, argues that this attribution should not be applied to a person simply because they are a member of a subculture that appears to possess different normative expectations. In fact, she argues that to do this, whether to a subculture or to another culture altogether, whether across space or time or both, is to deny the person their humanity and their agency, and is a "misguided cultural relativism" of sorts.[16] Furthermore, it is to deny that all persons are capable of critical analysis, which has already been shown to be inaccurate. Thus, to bring the argument full circle, it is possible for an individual both to commit a wrongful act and not to be responsible for the commission of the act if and only if the individual is either insane or incapable of critical analysis; and this is true regardless of space or time. However, this principle holds only insofar as the supposition that determinism and free will coexist holds, because this entire argument is founded on the individual being capable of choice, given that some things that exist are determined.

[1] Moody-Adams, Michele M. "Culture, Responsibility, and Affected Ignorance." Ethics, Vol. 104, No. 2 (January 1994), pp. 291-309.

[2] Strawson, P.F. Freedom and Resentment (1962)

[3] Strawson, p. 72

[4] Strawson, p. 80

[5] Moody-Adams, p. 291

[6] Moody-Adams, p. 292

[7] Moody-Adams, p. 293

[8] Moody-Adams, p. 293-294

[9] Moody-Adams, p. 294

[10] Moody-Adams, p. 295

[11] Moody-Adams, p. 296

[12] Moody-Adams, p. 301

[13] Moody-Adams, p. 302

[14] Moody-Adams, p. 303

[15] Moody-Adams, p. 303

[16] Moody-Adams, p. 308

Apathy and Responsibility: The American Response to the Holocaust

Millions of people were dying unjustly at the hands of the Nazi regime at the end of the 1930s and the beginning of the 1940s, and the American population and government, for all intents and purposes, were permitting this atrocity, or at least allowing it to happen. The pseudo-history presented in the United States today about Americans being the "heroes" of World War II is only part of the story. What is usually not included in these Hollywood retellings is how many Americans denied the truth and urgency of what later became known as the Holocaust, which in its general form means total destruction. Furthermore, the failure to acknowledge that there were pro-Nazi and anti-Semitic organizations active on American soil during the 1930s, engaging in propaganda campaigns, protest, and violence, slants the story to paint the U.S. as more responsive than it was, at best. History tells a different story. The moral burden of the people in the 30s and 40s was paramount because the unprecedented liquidation of an entire ethnic group was occurring, and responsibility was both unclaimed and undetermined. There were arguments on all sides, but while American government officials were debating what to do, if anything, the Jewish population of Europe was being exterminated. On the one hand, people were screaming for justice and for help, screaming for anything other than America's complicity in Hitler's campaign. On the other hand, there were the skeptics, non-believers, and non-confrontationists who indicted the screamers as war mongers, as liars, as unprepared for the true tasks ahead, "quietly and gently" calling for America and the people to wait.[i] And at the heart of the argument was the question of responsibility, because it is on that conception that the acceptance or denial of the duty to act hinged.

It is perhaps not difficult to understand how many people in the 30s and 40s felt a sense of urgency to help alleviate the suffering of the millions of Jewish people, Jewish sympathizers, and dissenters from Nazi rule in Europe. It is probably more difficult to conceive of people lacking a sense of urgency, who either believed the reports coming out of Europe were fabrications or were devoid of any sense of responsibility to their fellow humans. Fred Eastman was of the latter sort, and in 1944, having sufficient knowledge of the situation in Nazi-occupied Europe, he wrote a cold and calculated critique of the people with a sense of urgency, titled "A Reply to Screamers."[ii]

The document written by Eastman is a response to Arthur Koestler, a novelist who wove into his narratives some of the tragic tales he had experienced in Europe. Eastman admitted in his response that the "reports of the mass murders of Jews and countless others are too well authenticated to be denied," yet he lacked any motivation to join the screamers because he believed the real effort would begin after the war: "the long-term task of building peace."[iii] He thinks the screamers are responding emotively, in eruptions or fits, but does not provide any reason not to have an emotional response to what he termed "no blacker crime," and that is why he comes across as cold. For example, Eastman draws upon the Biblical parable of the Good Samaritan, a story that is supposed to express one's duty to help those in need, and although he could have chosen any example or explanation to follow it, he chooses to quote a girl so young she cannot form the 'th' sound and who has, as he calls it, an "emotional regurgitation" instead of the correct moral response, which would have been a desire to help.[iv] The implicit analogy Eastman makes with this young girl is that the screamers are uneducated and immature children who do not understand morality or duty to others, and are in need of guidance. It is this unfeeling, unsympathetic, matter-of-fact disregard for human connection and the bonds that actually motivate duty to others that makes Eastman's response to Koestler so cold. In addition, Eastman believed himself to be a typical representative of the American population that opposed the war and the efforts to help the Jewish people in Europe.[v]

In stark contrast to Eastman, Freda Kirchwey wrote an article in 1943 titled "While the Jews Die,"[vi] blaming the United States and the United Nations for their complicity and failure to do their duty to help those in need. After opening the article with an enumeration of the Nazis' program of extermination, Kirchwey straightforwardly identifies the blameworthy by stating: "In this country, you and I and the President and the Congress and the State Department are accessories to the crime and share Hitler's guilt."[vii] The "you" is a general you, and given the context of the sentence in which it is found, it seems most appropriate to assume the audience and recipient of the condemnation is the American people as a whole. Thus, Kirchwey lays blame flatly on both the citizens and the government of the United States for their skepticism, apathy, complicity, and "share" in the oppression and extermination of the Jews in Europe. Whereas Eastman believed the correct moral response was to wait, Kirchwey believed the Americans had already waited too long and the correct moral response was to act now to help the Jewish people. Kirchwey's article was written nearly a year prior to Eastman's response; by the time Eastman wrote, even more proof had been compiled, but the evidence was still not enough to motivate many Americans to accept the burden of duty with a sense of urgency to help those in need.

It is too easy an analysis to suppose that anti-Semitic sentiment and prejudice motivated the apathy of the American people; although this was certainly a factor in many people's judgments, the reasons for the lack of urgency are more nuanced than that. There was a lack of faith in the credibility of the reports, but also questions about the motivations of the people making the reports or screaming for action, and a belief that a conflict of this magnitude was inevitable. Eastman argues that the conflict with Hitler and the Nazi regime was a "mighty conflict…over [different] philosophies of life" that was destined to occur.[viii] Behind Eastman's belief in this conflict rested a nest of religious and political disputes about the origination and fruition of rights: God-given rights that lead to democracy, and state-granted rights that lead to tyranny by a "master race."[ix] This fatalistic perspective on the war with the Nazis and the extermination of the Jewish people in Europe omits autonomy, free will, and choice from the reckoning, and thus attempts to absolve responsibility. Notwithstanding the success of this line of reasoning, the objective was to assert that if there was no responsibility, then there was no duty to help those in need, and thus no need for any moral urgency to help the Jewish people in Europe.

The fatalistic reasoning Eastman employs probably did not resonate with the American people as much as his critique of the screamers, wherein he claims that they "do not tell us specifically what they want us to do."[x] This was the claim that founded his assertion that the screamers were calling for an "emotional regurgitation" instead of educated, correct moral responses. Eastman ends this particular critique by appealing to the fact that his sons were in the war fighting the Nazis, while the screamers were non-combatants, armchair moralizing but not assuming any of the risks. What is revealed through these connections, in correlation with what has already been mentioned, is that Eastman blamed the Jews and the screamers for imposing a duty to risk life and limb for a people and a cause that, in his view, it was not Americans' responsibility to defend. In the broader context, even given the anti-Semitic sentiments that existed within the United States in the 30s and 40s, the lack of moral urgency was more an outgrowth of the lack of felt moral responsibility than of prejudice alone.

At the heart of the issue of American apathy concerning the oppression and extermination of the Jewish people in Europe were conflicts of trust. At first it was the unbelievable character of the reports coming out of Germany and Europe, but many of those reports were verified, and still people continued to remain skeptical about the severity of the problem and their responsibility in the situation. As noted above, Eastman made two claims: that the screamers made no specific demands, and that they were non-combatants; and while that may have been the case for many, it was not always the case. Varian Fry was an American journalist who volunteered with the Emergency Rescue Committee in France in 1940 and created an underground network to help Jews escape Nazi extermination; he was what Eastman would consider a screamer.[xi] Thus, it is not the case that the screamers were not taking risks and responsibility; many were in fact acting on their convictions while simultaneously calling on others to act as well.

In 1942, Fry wrote "The Massacre of the Jews," which opens by accommodating and understanding why skepticism exists, then transitions to condemning, with focused anger, the apathetic and skeptical American population and government. Important to note in this account is the list of specific requested actions that Eastman claims does not exist. Fry calls on President Roosevelt and Churchill to make public statements and to "speak out again against these monstrous events." Fry also screamed for the development of tribunals to "amass facts," for diplomatic warnings to be issued to the countries in the Balkans region, for the Allies to form a blockade, to provide asylum for refugees, and to feed the Jews in the occupied territories. He also called on the Christian churches, the Protestant leaders, and the Pope to excommunicate and condemn anyone who assisted the Nazis. Lastly, Fry suggested that any efforts made should be broadcast and made public, because the Nazis' actions required secrecy, and he hoped publicity would "create resistance" and foster "rebellion" among the people. This is a very specific list of things that could be done to assist the Jewish people, and hardly any of them hint at combat; it also shatters the claim that the "screamers do not tell us specifically what they want us to do." What is revealed is that the American population was not listening to the screamers and chose to label them as war mongers as a justification for not assuming responsibility and not displaying the moral urgency necessary to prevent or end the mass extermination of the Jewish people in Europe.[xii]

This account should not be taken to mean that Americans did not play a pivotal role in WWII and the liberation of the Jewish people from the Nazi concentration camps and occupation, because that is not true. This account was meant to convey a portion of the complex and disparate moral and ethical views of Americans in the 1930s and 1940s by analyzing their own words and setting them into context with one another. By doing so, I hope this exposition has challenged and contradicted the pseudo-history that presents the decision to go to war as a simple one. There is great sacrifice in going to war for any reason, especially when it is for another country and people. Not only was the Nazi campaign unprecedented in history, but so was the Allied response to Hitler's Nazi regime, and it had to be justified both to the United States Congress and to the American citizenry. For some, the mere numbers, methods, and duration of the oppression and extermination of the Jews were enough justification to warrant the moral urgency. However, others were either reluctant to believe, felt the need to wait, or were not willing to sacrifice the resources and lives necessary for a people toward whom they did not feel obligatory duties. The volume of people killed and the scope of the Nazis' plans brought the ethical dilemma "to kill or let die" to the surface, wherein America's apathy was indicted, as Kirchwey says, for being "accessories to the crime" and thus responsible to act with moral urgency.

[i] Eastman, Fred, "A Reply to Screamers," Christian Century, February 6, 1944. America Views the Holocaust 1933-1945: A Brief Documentary History, ed. Robert H. Abzug (Boston: Bedford/St. Martin's, 1999), 171.

[ii] Eastman, 170-174.

[iii] Eastman, 173.

[iv] Eastman, 172.

[v] Eastman, 171.

[vi] Kirchwey, Freda, "While the Jews Die," Nation, March 13, 1943. America Views the Holocaust 1933-1945: A Brief Documentary History, ed. Robert H. Abzug (Boston: Bedford/St. Martin's, 1999), 152-155.

[vii]Kirchwey, 153.

[viii] Eastman, 172.

[ix] Eastman, 172.

[x] Eastman, 172.

[xi] Fry, Varian, "The Massacre of the Jews," New Republic, December 21, 1942. America Views the Holocaust 1933-1945: A Brief Documentary History, ed. Robert H. Abzug (Boston: Bedford/St. Martin's, 1999), 126-127.

[xii] Fry, 132-133.

Global Government: A Remedy to Collective Action Problems

All the states and all the individuals on the earth share one planet, with one finite pool of resources that everyone depends upon, yet there is no ultimate authority with the jurisdiction and enforcement power to manage that finite pool of resources. Currently the world is composed of sovereign states that claim jurisdiction and enforcement power over their citizens and their territories, and they expect other states to honor the principle of non-intervention, thereby leaving each state to act autonomously in its own interest. However, I believe that each individual on this planet has an obligation to more than just the citizens of the state they happen to be from, precisely because we all share a finite pool of resources; each individual is responsible for how they use those resources, because their use directly affects everyone else's ability to use them. However, as will be shown, if states are allowed to remain autonomous, then there is greater incentive for each state to act in its own interest as opposed to cooperating with other states to manage our finite pool of resources. For these reasons, I believe that we have a moral obligation to create a governing institution with jurisdiction and enforcement power for the entire globe, because there does not seem to be another way to manage our resources effectively.

The planet and all of its citizens are faced with problems that supersede the jurisdiction and enforcement power of any individual state or group of states, and currently there is no governmental agency or entity with the authority to mitigate these problems. According to David Held, since the signing of the Treaty of Westphalia in 1648, states have operated on two principles: sovereignty and non-intervention.[1] Held also goes to great effort to establish the point of globalization by showing that as states have expanded, populations have grown, and new technologies have emerged, the decisions and actions now taken have impacts that increasingly cross borders and affect more than just the citizens of a self-contained, sovereign state.[2] The easiest and perhaps least refutable example of this phenomenon is pollution, or environmental effects. Peter Singer notes in his chapter "One Atmosphere" that "Britain's Sellafield nuclear power plant is emitting radioactive wastes that are reaching the Norwegian coastline,"[3] which, although just one example, should serve to establish that the actions of one state can and often do affect other states. Yet, while Singer uses this example to show that there is an international law which allows suits to be brought against states for affecting other states, there is still no overarching jurisdiction and enforcement power to stop such actions from happening in the first place. The current situation between states resembles a Prisoner's Dilemma, and I think the problem is the structure of the system: when every individual and state is in competition for limited resources (land, fuel, energy, potable water, clean air, food, etc.), even though the best outcome for all is to be found in cooperation, there is no reason to trust that the other agents will not defect and "free-ride" on the efforts of the rest.[4]

The most practical and imaginable form of world government in the current political environment of the 21st century is a federation of states, or as Held calls it, a "cosmopolitan community," a democratic community of democratic communities.[5] States would exercise jurisdiction and enforcement power over the territory and the citizens they represent, and the federation would have jurisdiction and enforcement power to regulate the interaction between states and any action, taken by a state or the citizens of a state, that would have an impact beyond the state's immediate jurisdiction. The closest contemporary example of what this federation could look like is the European Union, which has an EU Council (representing states) and an EU Parliament (representing citizens), while the member states retain veto power in certain areas.[6] At all levels of the federation, the federation would operate by democratic principles, wherein representatives are elected by the group they are directly responsible to, and policy decisions would be made according to what Peter Singer calls the principle of "subsidiarity," whereby issues are managed "at the lowest level capable of dealing with the problem."[7] Such an institution would thus have democratic accountability and the authority to address and mitigate the collective action problems individual sovereign states now face.

However, there are obvious moral problems with forcing states and their citizens to become democratic societies, because doing so supersedes their right to plan their own future: their right to self-determination. As John Rawls identifies in The Law of Peoples (1993), there is the capacity for states to develop as "well-ordered hierarchical societies," in other words, not as liberal societies wherein all people are free and equal, but as societies that are nonetheless not expansionist and whose governments derive their legitimacy from their citizenry.[8] The point Rawls made with the discussion of "well-ordered hierarchical societies" is that there are forms of government and society that do not fit the democratic model but no less deserve to have their rights to self-determination respected. After the end of World War One, the British and French mandate systems established at the San Remo Conference of 1920 were imposed on the Middle Eastern countries of Syria, Jordan, Lebanon, and Iraq. Under the guise of Wilsonian "self-determination" and a civilizing mission, Britain and France claimed to be assisting these Middle Eastern countries to become self-sufficient democratic societies, but did not anticipate the severity of opposition from the people of these diverse countries. The mandate system drew arbitrary borders and set up unequal systems of representation that did not represent the population; not only were people forced into political debate with others they had previously not debated with, but they also felt the pains of an unequal distribution of power that was primarily located in the hands of a minority population. Essentially, the problem was that the mandate system created a series of governments that had not achieved legitimacy, because the people themselves selected neither the systems of government nor the representatives.
The imposition of Western forms of democratic government upon hierarchical societies was a recipe for disaster and led to a series of revolutions and counter-revolutions in many of these countries that left thousands dead and futures uncertain.[9] Yet my concern here is not how to influence or encourage non-democratic governments and societies to become democratic; it is to determine whether we have a moral obligation to form a global government. So, for the sake of argument, I will assume that societies and governments have not been coerced into becoming democratic, but have rather chosen of their own free will, as agents exercising their right to plan their own futures, to become democratic societies.

Most proponents who believe that human rights and justice are important also believe that a democratic form of government is necessary to achieve those ends, however much they may differ about the structure that government should assume. Some, like Will Kymlicka, while acknowledging that globalization is occurring, challenge the conception of a cosmopolitan citizenship by suggesting that although “a new civil society” is emerging, “it has not yet produced anything that we can recognize as transnational citizenship.”[10] The hinge-point of Kymlicka’s argument rests on his assertion that “democracy is not just a formula for aggregating votes, but is also a system of collective deliberation and legitimation,”[11] and since he believes that people decide to deliberate and share the “blessings and burdens”[12] of those political deliberations with people who share similar histories and circumstances, a cosmopolitan citizenship is not practical at this time: people will either choose not to participate or will be incapable of deliberating on that broad a scale. If this is true, then the system will fail to meet the necessary conditions of a democratic society of free and equal persons contributing to deliberations, because only those who could communicate in a broad range of languages, and who felt comfortable enough to debate political issues, would be party to the decisions made; as such, the system would not be just. Kymlicka believes that making individuals citizens of a world government before they are ready to form one would undermine and potentially ruin the democratic process, and thus fail to achieve the objectives of human rights and justice.
Kymlicka does, however, assert that we should, as a civilization, be progressing toward a more cosmopolitan citizenship, especially in terms of the “principles of human rights, democracy, and environmental protection,” but he does not believe it is achievable in our lifetime.[13] He argues that although a greater territorial range of voters may influence a global government in some way, the government “would cease to be accountable to citizens through their national legislatures,”[14] where democracy is “more genuinely participatory”;[15] it would, essentially, form a tyranny of the majority over the minority. Kymlicka argues that the citizens who cannot take part in the cosmopolitan debate have only their local governments to appeal to, but if that local government’s authority is undermined and superseded by a transnational vote, then they lose the only agent capable of representing their interests internationally. Thus, unlike Held, who argues in favor of a “cosmopolitan citizenship,” Kymlicka believes in a parochial democratic institution of government, because states are not only responsible for, but are also accountable to, and thus responsive to, their citizens’ needs, whereas a global government cannot be: such a system has the inherent flaw of denying minority populations a voice in the decision-making process, which would have the unintended effect of making the system unjust.

Kymlicka’s argument has merit, and it is founded on a keen observation of human behavior and the desire to invoke the right to freedom of association. Perhaps the most pressing moral justification for individual states retaining autonomy, as opposed to a world government, is the concept of “communities of fate,” first presented by Held and furthered by Kymlicka. According to Kymlicka, “[p]eople belong to the same community of fate if they feel some sense of responsibility for one another’s fate, and so want to deliberate together about how to respond collectively to the challenges facing the community.”[16] Kymlicka draws the conclusion that it is impractical, and potentially impossible, to expect individuals to be citizens of a global democracy, both because of the factors which constitute a “community of fate” and because of the requirements for full democratic participation. The most compelling argument he makes concerning the factors of a “community of fate” is about the language that people share: he argues that average people prefer to deliberate democratically in their native tongue and will opt out of multilingual transnational democratic deliberation.[17] If this is the case, then the state, or as Kymlicka puts it, the “nation,” would feel the obligation of a special social contract with its citizens that it does not feel toward the citizens of other states, and as such would be the most appropriate authority to have immediate jurisdiction and enforcement power.

Kymlicka concludes his argument by stating: “our democratic citizenship is, and will remain for the foreseeable future, national in scope,”[18] suggesting that a global government is unjustifiable because it is impractical at this time. However, even Peter Singer, who is a strong proponent of a global government with jurisdiction and enforcement power, agrees that we should not rush into federalism and instead suggests “a pragmatic, step-by-step approach to greater global governance.”[19] Kymlicka successfully argues the point that “democratic politics is the politics in the vernacular,”[20] wherein citizens debate in the self-interest of their own nations and states, with those who speak their language, and without undermining the state’s accountability to its citizens. His redefinition of “community of fate” shows that people within one state are more likely to feel a sense of obligation to citizens of their own state than to those of another, because they share a common identity; yet I think it also serves to bolster Held’s concept of “multiple citizenship” within a “cosmopolitan model of democracy.”[21] If the global government were structured to operate on the principle of subsidiarity that Singer promotes, then citizens would still debate in the vernacular in parochial democratic bodies, while also functioning as democratic members of the cosmopolitan community managing issues greater than the states’ jurisdiction. Thus, it does appear that a practical and responsive global government can be conceived, and potentially structured to function in a way that answers Kymlicka’s very serious and relevant concerns. What remains is to establish what obligation we, as citizens of this planet, have in terms of a global government.

Kymlicka astutely asserts that people in a community of fate feel a unique obligation to one another that they do not share with people of other communities of fate. Entailed within that assertion, however, is the implicit claim that there is not currently a global community of fate, and I disagree with that assumption because I believe it can be shown that there is. It would nonetheless be foolish to assume that we feel the same obligation to people we have never met as we do to residents of our neighborhood. A person living in Tripoli would not be obligated to prevent or promote the construction of a road in Denver, or vice versa, because the construction of a single road is of little consequence to the other party; but promoting or preventing total world annihilation, say from nuclear war, is quite a different story, because such an event is of great consequence to both parties. This makes intuitive sense because people tend to acknowledge that we, as citizens of the planet, have an obligation not to unjustly deprive people of their right to life. So, while Kymlicka is correct in asserting that parochial communities of fate share certain obligations, it is also the case that there are varying degrees of obligation depending on the circumstance in question. Thus, when issues have global ramifications and something can be done to prevent that which we share obligations to prevent, those who have a choice in its prevention also have the responsibility to prevent it.

If my argument has thus far been sound, then it follows that we as citizens of the planet have a responsibility to prevent the types of collective action problems that pose a threat to the entire globe. If that is the case, then we are responsible for the creation and implementation of some type of institution that can adequately prevent those collective action problems. While states as sovereign entities do have the capacity to address certain types of collective action problems, as has been shown, states also fall victim to the prisoner’s dilemma and have more incentive to defect than to cooperate; as such, they are inadequate for addressing problems that pose a threat to the entire globe. That being the case, the most practical alternative is to create a world government that has jurisdiction and enforcement power over states to address such problems. An objection may be made here that appeals to transnational institutions, but transnational institutions have historically lacked the jurisdiction and enforcement power necessary to address these threats, because states have been reluctant to relinquish their autonomy. Thus, unless we as a global community of fate decide to enhance the jurisdiction and enforcement power of “neutral third-party” transnational institutions, the only other viable option at this time to meet our obligation is the creation of a world government composed of a federation of states, or as Held called it, a “cosmopolitan democracy.”


[1] Held, David. The Transformation of Political Community: Rethinking Democracy in the Context of Globalization, 87.

[2] Held, 92.

[3] Singer, Peter. One World: The Ethics of Globalization (One Atmosphere), 20.

[4] Gardiner, Steven M. A Perfect Moral Storm: Climate Change, Intergenerational Ethics and the Problem of Moral Corruption, 399.

[5] Held, 106.

[6] European Union. (http://europa.eu/eu-law/decision-making/procedures/index_en.htm)

[7] Singer, Peter. One World: The Ethics of Globalization, (A Better World), 200.

[8] Rawls, John. The Law of Peoples (1993), 530.

[9] Cleveland, William L. and Martin Bunton. A History of the Modern Middle East. (Westview Press, 2013), chapters 9-13.

[10] Kymlicka, 125.

[11] Kymlicka, 119.

[12] Kymlicka, 115.

[13] Kymlicka, 125.

[14] Kymlicka, 124.

[15] Kymlicka, 120.

[16] Kymlicka, 115.

[17] Kymlicka, 121.

[18] Kymlicka, 125.

[19] Singer, Peter. One World: The Ethics of Globalization, (A Better World), 200.

[20] Kymlicka, 121.

[21] Held, 107.

The Significance of “Black Friday”

One of the coolest gifts of being in school is that I get to learn about our world: what we have done, what we are doing, and what we have the capacity to do as human beings. One of the freshest aspects of studying history is that I have the opportunity to learn the facts and concepts that have shaped our civilization. And as a cap to all of that, I have been granted the privilege of evaluating that information and those assertions through my studies of philosophy, whereby I am learning how to use and design moral frameworks from which I can evaluate the implications of what has been done and what “should” be done in the future, in terms of what is justified and what is obligated of human beings; and I can base my interpretations in historical fact.

Last night I came across the term “Black Friday” in my history textbook, A History of the Modern Middle East (William L. Cleveland, 2013), where it named a tragic scene in Iranian history. And before I make it seem like this is meant to present a negative perspective of Iran, or any Middle Eastern country, what I am going to tell you about this event has occurred in some fashion in every culture, nation, state, and society that I have studied so far. As it turns out, “Black Friday” was a term used to describe the response of Mohammad Reza Shah’s regime to a large mass of unarmed students, workers, and other civilians protesting the actions of the regime. On Friday, September 8, 1978, the Shah’s regime sent tanks, helicopter gunships, and the army into the crowds and killed hundreds of unarmed civilians to quell the protesters and silence them.

After reading that, I questioned when and why the term “Black Friday” was coined, because, as I am sure most of you are aware, it is associated with the Friday that follows the American holiday Thanksgiving, which occurs on the fourth Thursday of November. (The point of this post is not to call into question the moral implications of that holiday; that will be for a later post.) The contemporary meaning of Black Friday, according to blackfriday.com, since 1924 and the first Macy’s Thanksgiving Day Parade, has marked the beginning of the holiday shopping season, wherein companies move “from the red into the black,” a phrase signifying the shift from operating at a loss to earning a profit.

I fact-checked those claims with snopes.com and found that the term was coined in 1951, in reference to employees calling in sick the Friday after Thanksgiving. The site further notes that in the early 1960s, Philadelphia police used “Black Friday” to describe the traffic problems caused by shopping in the metropolitan district. Snopes.com also confirmed the usage of the term that blackfriday.com mentioned, in regard to its marking the beginning of the holiday shopping season. Snopes.com did, however, discredit the claim that “Black Friday” was coined to describe a special business day for the selling of slaves in the 19th century.

However, in all of this research, I did not see any reference to what occurred in Iran in 1978 under the rule of Mohammad Reza Shah of the Pahlavi Dynasty. My initial concern was that here in America we could have taken a term used to define an atrocity and made it mean something different, something that is actually celebrated and rewarded annually. Such transitions in the meaning of terms are not unheard of. Many of us in the United States are familiar with, or use, the term “rule of thumb” to mean a general rule of operation, or in other words a maxim, but most of us do not know where it comes from. One popular, though historically disputed, account holds that the “rule of thumb” refers to a legal rule that a husband was justified in beating his wife with a stick no wider than the diameter of his thumb. This was the grounds for my concern and what motivated my research into the etymology of the term. And I have been able to clear up that the usage of the term to describe the initiation of the holiday shopping season predates the atrocity in Tehran, Iran by more than ten years.

Before leaving you all, I would like to briefly comment on the significance of the 1978 event, wherein the regime utilized the military to suppress the voices of people expressing discontent. As I mentioned earlier, this is not something confined to Iran, or the Middle East. We need only peer into U.S. history to be acquainted with the suppression of African Americans during the 1960s Civil Rights Movement, who voiced dissent and were met with riot police called in to suppress them. Or consider, more recently, the suppression of protesters participating in the Occupy movement, to start to form an idea that suppression of dissent is not something that happens only outside of the United States, or is contained to the distant past.

As citizens of this world, no matter what country we live in, whether it is a democratic state, a hierarchical one, or a state whose government is based on the observance of religion or on a monarch, the consistent pattern is that when the voice of dissent is suppressed, it leads to outcomes in that nation or state that are undesirable to the population as a whole. Sometimes suppression is more implicit than armed forces marching into the metropolitan area of a city and murdering hundreds of civilians. One type of power concerns the control of the agenda. This is important because even in a democratic society wherein the people are “allowed” to have a voice, if the agenda of what they can voice an opinion about is constrained, then those in power, with the motivation to protect that power, can shape the agenda to ensure that the issues that most threaten their position are never brought up for a vote.

The United States is largely a consumer society that bases much of its identity in spending power, or the prestige that comes from possessing such power. Furthermore, in a society of conspicuous consumption, wherein status symbols (clothes, cars, watches, etc.) are used to delineate social class and thus power, the citizens have a vulnerability that can be exploited by those in power. To connect this to the previous ideas of suppressing dissenting voices and controlling the agenda: when the elites can focus the populace’s attention on consumerism, attaching their self-worth to how much they can buy (social trappings), they can effectively control the agenda. If this line of reasoning is accurate, then the citizens of the United States are systematically having their dissenting voices suppressed by consumerism.

So, while it may not be the case that the term “Black Friday” was explicitly designed and coined to represent the oppression of people and the suppression of dissenting voices, it is nonetheless clear that an argument can be made to support the claim that our voices of dissent can be suppressed by such means.

And to think, that all of this thought came from one paragraph in my history textbook… Yeah, I love school. I decided to go to school to get an education and what has occurred is that it has changed the way I think about the world. I am now being armed with the skills and the knowledge to evaluate the world we live in. And this is precisely the reason that I decided to go to school.

http://blackfriday.com/pages/black-friday-history

http://www.snopes.com/holidays/thanksgiving/blackfriday.asp

“A History of the Modern Middle East” by William L. Cleveland; 2013

The Relevance of Philosophy in Both Global and Domestic Debates

People quietly dismiss the relevance of philosophy, yet proceed to complain about the state of the world and the state of our relationships with each other, because we tend to hold others to, or feel that people share or “should” share, some form of moral responsibility to others.

One argument against philosophy as the discipline for defining our moral responsibilities appeals to religion’s capacity to perform that function. Yet with all of the contradictions found not only within one religion but between the religions of the world, that becomes an exceedingly difficult argument to justify and support.

Religion might still prove to be of great benefit here if we were not confronted with globalization, wherein groups interact. In that type of situation the moral obligations of individual groups tend to conflict with one another, which is why there is so much tension over what people or nations are morally responsible to do or to abstain from doing.

Philosophy, at least as much as I understand it thus far, when it is concerned with morality and ethics, seeks to define an overarching ethical framework that transcends those boundaries. This is why I believe that philosophy should not simply be dismissed, aside from the fact that we all seem to practice, and respect, the idea that moral responsibility is important.

When Justification Is Not Sufficient

I think it is ironic how people pick and choose which parts of a religion to honor when it suits them to do so, but ignore the parts that do not fit so well with their perspective. I have studied several cultures that practice several religions, and I have repeatedly encountered the sobering fact that regardless of what is written, it is interpreted in many ways by many people. I am not arguing that religion is bad; here I am just asserting an observation: if a person looks hard enough, they will find some line or other in a religious text to justify an act, or likewise to un-justify an act.

The point is that there is a difference between practicing a religion and “using” a religion, and religion can be highly dangerous when it is used, no matter which religion it is.

For example: it was used to justify slavery, to justify the subjugation of women, to justify colonization, to justify witch hunts and burnings, and now it is being used to foster hatred of people who choose to be, or are, with someone of the same sex. In all of these cases religion has been “used” as a tool to justify some form of oppression, which drives a wedge between human beings. To be fair, religion has also been the impetus for much good in the world, but for the sake of this conversation, wherein the propagation of hostile views is being justified by religion, I feel that these points bear a lot of weight.

Regardless of the justification for it, hatred is still hatred and oppression is still oppression. Religion as a moral standard does not give people a blank check for their behaviors; they are still morally responsible for their actions, however ‘right’ they believe themselves to be.

To Kill or to Let Die: That is the Question

In this essay I will be comparing and contrasting two ethical frameworks to ascertain both their relevance and effectiveness in deciding how to choose an action in a given situation. The two ethical frameworks being considered are virtue ethics as described by Aristotle and deontology as described by Immanuel Kant, and they will both be used to analyze a moral dilemma concerned with the theme of killing or letting die. However, before evaluating the dilemma it may be prudent to summarize the ethical frameworks first.

Virtue ethics is an agent-centered ethical framework through which its practitioners seek both to determine and to develop the morality of individuals. Whereas an action-centered ethical framework such as deontology is concerned with the act in which an agent engages, an agent-centered framework is concerned with the character of the agent. According to Rosalind Hursthouse in Normative Virtue Ethics, a virtue is “a character trait that a human needs for eudaimonia, to flourish or live well” (p. 130). Virtues such as wisdom, honesty, compassion, loyalty, and justice, when practiced, aim the agent’s actions at achieving this eudaimonia, or happiness, which Aristotle believed was the “chief good” and the ultimate end of all pursuits (p. 118). According to Aristotle, a moral person is one who has the ‘habit’ of acting in accordance with virtues or excellences (p. 120), as opposed to their converses, which are considered vices and are thus immoral.

Practitioners of deontology, on the other hand, seek to classify the morality of the decisions and behaviors of an agent. This action-centered ethical framework is concerned neither with the consequences of an action, nor with the character of the individual who acts; rather, it is concerned with the reasoning that precedes and compels an action. Immanuel Kant established what he termed the Categorical Imperative in his work titled Grounding for the Metaphysics of Morals, which serves as a rubric for evaluating the morality of a given action. According to Kant, an action is moral only if it is done from duty, motivated not by the consequences of the action but by a particular maxim (an intention or policy of behavior), and necessitated out of respect for the law (p. 107). The law, according to Kant, is determined by subjecting each act to the Categorical Imperative: the universalizability of the act as a law is considered first, and if it can be universalized, then it is determined whether the act uses any human merely as a means rather than also as an end (Woody Lecture Notes, Nov. 14). If the act passes both of these tests, then it is considered moral.

The moral dilemma that I will consider through both ethical frameworks is as follows:

A group of four friends, all in perfect health and doing well in college, are on a hiking trip when they are overrun and captured by a gang bent on testing the limits of human morality. The gang randomly selects one of the four friends and gives her an ultimatum: either she kills one of her friends, and she and the other two will be set free, or all four of them will be killed by the gang.

At first glance, from both ethical frameworks, if she chooses to kill one of her friends it will be an immoral decision. The virtue of justice will not permit the violation of a person’s right to life, so it is not moral to kill from the standpoint of virtue ethics. The act of killing a friend to save one’s self is using the friend as a mere means, so it is also not moral for her to kill in this instance from a deontological standpoint. At first glance, then, it appears that the only moral option is to omit killing one and to let them all die.

However, if she omits killing one to save herself and two friends from being killed, there is also the potential that she is making an immoral decision. If by omission she violates the virtue of compassion toward the two friends who would otherwise be saved by her killing the one, then her lack of compassion is immoral. This is a plausible interpretation of virtue ethics as proposed by Aristotle, because no hierarchy of virtues is provided to discern which virtues take precedence over the others. She also has a duty to help those in need when she has the capacity to do so. By omitting to kill the one, she lets herself and her friends die, and thus makes the immoral decision of not helping friends who could otherwise avoid being killed if she were to kill the one. Thus, at a second glance it appears that neither decision she can make is moral from either ethical framework. However, there may be a way to reconcile one or both of the frameworks with the current situation so that a moral decision can be derived.

Both virtue ethics and deontology are situated such that they are capable of addressing the nuances of individual situations, so it may be possible to illuminate a virtue or a universal maxim to derive a solution to this moral dilemma. Suppose a universal maxim could be derived such that, if required to kill one to save three, then kill one, if and only if all four sincerely agree to kill one and the one to be killed is agreed upon by all four. Under this maxim, neither the one being killed nor the one killing is being treated as a mere means, because each is being respected “as a rational person with his or her own maxims” and each is also “seek[ing] to foster other’s plans and maxims by sharing in their ends” (O’Neill, p. 114), the end being saving three. The problem with this solution, however, is that although it has met all the conditions of the Categorical Imperative, being both universalizable and avoiding the treatment of anyone as a mere means, it nonetheless reveals that if so specific a maxim can be morally permissible just because the situation deems it necessary, then the ethical framework is not restrictive enough to delineate between moral and immoral actions. In other words, it does not seem moral to be able to make up laws for each new situation, because if this were permissible, then anything could potentially be rationalized as a moral act.

The outcome of applying these two ethical frameworks, virtue ethics and deontology, to the dilemma of kill or let die has revealed that there is no decision that is more morally right than another. The point of an ethical framework is to help us humans figure out what we ought to do when we encounter dilemmas, and however improbable this situation is to arise in most of our lives, it is nonetheless possible. Furthermore, variants of this scenario, wherein a group has to decide between killing a minority of the group to save a majority, are perhaps more frequent, and in those situations the constraints of this scenario may still hold. This scenario has revealed that when a group is confronted with a situation like this, where a choice has to be made between killing or letting die, there is no simple answer, no quick fix, no easy solution, and there may not be a correct answer. What is important to note is that while these ethical frameworks have proved inadequate at deciphering a more morally right choice in this scenario, they nonetheless show that we are morally responsible for our actions and that decisions of this magnitude cannot be made lightly.

Works Cited

Aristotle. “Selections from the Nicomachean Ethics.” The Elements of Philosophy: Readings from Past to Present. Ed. Tamar Szabo Gendler, et al. New York: Oxford University Press, 2008. 114-127. Print.

Kant, Immanuel. “Selections from Grounding for the Metaphysics of Morals.” 1785. The Elements of Philosophy: Readings from Past to Present. Ed. Tamar Szabo Gendler, et al. New York: Oxford University Press, 2008. 105-111. Print.

O’Neill, Onora. “A Simplified Account of Kant’s Ethics.” 1980. The Elements of Philosophy: Readings from Past to Present. Ed. Tamar Szabo Gendler, et al. New York: Oxford University Press, 2008. 112-114. Print.

Woody, Andrea. Philosophy 100 Lecture. University of Washington, Seattle, WA, November 14, 2013.