Being Human. Chapter 2. Cultural And Social Dimensions Of The Self

A group of international students is sitting around the dinner table discussing the television menu for the evening. A Norwegian woman student says, “Let’s watch the soap; exciting things are happening to the relationships in the show.” A student from Asia disagrees since soaps “show disrespect for social values and relationships.” Someone from the States suggests watching a boxing match since it “demonstrates personal courage and achievement of the up-and-coming athletes.” The Asian student replies that rather than boxing, watching a team sport like soccer is more interesting. Another supporter of the soap option, however, suggests that soap dramas are much more exciting as they deal with relationships, and “that is all there really is to life.”

Cultural and gender stereotypes like those parodied above are addressed in this chapter. Our social selves are partially defined by gender and cultural values, among much else. How do we come to be who we are? How is the self formed, and what function does it play in the psychological economy of the individual? Are we motivated to behave in certain ways depending on our social selves? What is the route to well-being; does it help to have illusions about life? Why do we spend so much time and effort trying to impress others, and is impression management adaptive? These and many other issues are discussed in this chapter.

Who we are and where we come from has engaged the attention of philosophers and psychologists for generations. In more recent times the methods of experimental social psychology have been employed in the quest to understand the self and its dominant attributes. The self is defined as the set of beliefs we hold about ourselves and our attributes. We think of ourselves in terms of important personal characteristics like our career choice, our level of competence, and our plans for the future. The latter define our possible selves. The continuity we feel in life is due to the self-concept. Similarity in personality with siblings, and especially identical twins, is based on common biological heritability that also contributes to selfhood.

Everything important about our lives, from family relationships and development to the cultural and social context in which we live, contributes to the topic of this chapter. Self-knowledge provides direction and order in our lives. Since we all fall short in goal attainment, there is a balance to be struck between our flaws and our sense of self-efficacy. These discrepancies directly affect how we feel about ourselves, our self-esteem. Since feelings of self-esteem are also bound up with how others think about us, we perform in the great theater that is life, playing out roles of self-presentation. We want to convince others of our positive qualities and therefore have strong motives to manage the impression we make. We know how to react appropriately to varying situational demands because culture creates the parameters of appropriate conduct.

1. The beginnings of the social self
Self-awareness begins early in life. By about nine months of age the average child begins to differentiate the self from others (Harter, 1983). By the age of 18 months the typical child has developed a sense of self-awareness, for example reacting more to pictures of themselves than to those of unrelated people. Gradually, as our self-knowledge grows, the primitive sense of self takes on other attributes. Our environment may nurture positive self-attributes leading to feelings of competence or self-efficacy. Others not so fortunate live in restrictive environments that place early limits on what is considered possible, and therefore affect plans for career and development. We are not the only species to demonstrate self-awareness (Gallup, 1977; 1997). In Gallup’s studies a mirror was initially placed in the cage of chimpanzees until it became a familiar object. Afterwards the experimenter placed an odorless red dye on the animals’ ears or brows. The animals recognized that something had changed and responded by immediately touching the dyed area. Studies with dolphins and other animals demonstrate a similar pattern of self-recognition (Mitchell, 2003).

1.1 Self-knowledge
Using similar techniques with toddlers, researchers found that self-recognition is present at around age two (Lewis, 1997; Povinelli, Landau, & Perrilloux, 1996). Over time the child begins to incorporate psychological attributes including more complex feelings and thoughts. Our social self is inseparable from how we are evaluated by others (Hart & Damon, 1986). As we develop more complex beliefs and feelings about the self, we also begin to project ourselves to some degree into the future. From these initial experiences with the family, educational system, and the broader culture the social self gradually emerges. The self-concept is the knowledge we have of ourselves, that we exist separately from others, and have our own unique properties. As part of our self-knowledge we develop a belief system that governs behavior. Do we live in a world of chaos or order? Do we believe we can accomplish important goals? Can other people be trusted? Is it a dog-eat-dog world, or are there valid altruistic behaviors? This complex web of beliefs in turn contributes to whether we approach or avoid others, our feelings of self-esteem, and whether we have a concept of what we could become in the future, a possible self. In this process of maturation children gradually place less emphasis on concrete physical descriptions of the self, and more emphasis on complex psychological states including thoughts, feelings, and the evaluations of others (Harter, 2003; Hart & Damon, 1986).

1.2 Self-esteem
The second aspect of the self-concept consists of our self-evaluations, or self-esteem. Self-esteem is evaluative, based on very basic judgments of personal morality and on whether in our own eyes we are satisfied or dissatisfied with our performance. Global self-esteem can be measured by surveys and is related to our need for approval (e.g. Larsen, 1969). The lower our self-esteem, the more we need affirmation and approval from others and society. High self-esteem, on the other hand, is associated with setting appropriate goals, using feedback from others to progress, and enjoying positive experiences to the fullest extent possible (Wood, Heimpel, & Michel, 2003). When experiencing rejection or frustration, those with high self-esteem will find a silver lining. High self-esteem is adaptive and is associated with goal persistence and the ability, when frustrated, to envision alternative goals (Sommer & Baumeister, 2002). High self-esteem people will look at the past through rose-colored glasses, and this selective positive memory bias may in turn support higher self-esteem (Christensen, Wood, & Barrett, 2003).

On the other hand, people with low self-esteem not only think poorly of themselves; their negative self-conceptions have other unfortunate consequences. Low self-esteem persons are more pessimistic about the future, tend to obsess about their negative moods, are more concerned about the opinions of others, and have higher needs for approval (Heimpel, Wood, Marshall, & Brown, 2002). Low self-esteem is also reflected in negative estimations of competence or self-efficacy, and in self-loathing. Those with positive feelings toward the self, in contrast, like themselves and have feelings of competence (Tafarodi, Marshall, & Milne, 2003). As we shall see throughout this chapter and those that follow, the cultural context matters. Members of Asian cultures, for example, are less self-enhancing in explicit ways, but enhance more in implicit ways (Koole, Dijksterhuis, & Van Knippenberg, 2001).

2. Building blocks of the emerging self
Children are not truly a tabula rasa when entering the world. Scientists have for some time found traits that seem to be universal across cultures. Traits typically describe cross-situational consistency; i.e., the consistent way people act, think, or feel despite changing circumstances. Researchers point to five traits as basic to our self-understanding. These characteristics are openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism, also known as the Big Five (Costa & McCrae, 1995; John & Srivastava, 1999).

People use these basic traits in describing themselves, and in judging other people. The descriptions of others tend to be accurate in the sense that they match self-descriptions (Funder, 1995; John & Robins, 1993; Watson, 1989). Many psychologists believe that the Big Five traits are the basic building blocks of personality. Is there a biological basis for these fundamental traits? The evidence is pointing in that direction since people from a variety of countries and cultures use these same traits in describing the self and other people (Buss, 1999).

2.1 The heritability of personality traits
Evidence has been produced that supports at least the partial heritability of personality traits (Plomin & Caspi, 1998). Studies of identical and fraternal twins show that trait similarity is partly based on shared genes. For example, studies of the personalities of identical twins show greater similarity in traits compared to fraternal twins. Those trait similarities remain even when identical twins are reared apart, strongly suggesting a genetic component to some aspects of personality (Loehlin, 1992).
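As a rough illustration (not from the source) of how such twin comparisons yield numerical estimates, behavioral geneticists often use Falconer's formula, which doubles the difference between the identical (MZ) and fraternal (DZ) twin correlations for a trait:

\[ h^{2} \approx 2\,(r_{MZ} - r_{DZ}) \]

With hypothetical but typical values such as \( r_{MZ} = 0.50 \) and \( r_{DZ} = 0.25 \), the estimate is \( h^{2} \approx 2(0.50 - 0.25) = 0.50 \), suggesting that roughly half of the variation in such a trait would be attributable to genetic differences, with the remainder reflecting environmental influences and measurement error.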

Often traits found early in development are consistent over the lifespan. Longitudinal studies have shown that children identified as shy at nine months develop elevated levels of the stress hormone cortisol, which is associated with fear (Kagan, 1989). Neuroticism is associated with heightened activation of the autonomic nervous system involved in subjective stress (Zuckerman, 1996). On the positive side, extraversion is related to higher levels of the neurotransmitter dopamine, which in turn predicts approach-related behaviors (DePue, 1995). Clearly the self cannot be understood apart from our biological inheritance. Other people react consistently to the varying manifestations of these traits in us. These reactions in turn play a significant role in how we develop as persons and how we develop more complex self-identities (Malatesta, 1990).

2.2 Genetics and social behavior
The relationship of genetics to complex social behavior is an exciting new frontier. Social behavior is complex, and both genes and the social environment play a role. Some genes require specific environments to have an effect on behavior, so interactions matter. In a study on violence (Caspi, McClay, Moffitt, Mill, Martin, & Craig, 2002) the researchers tested for variants of the monoamine oxidase A (MAOA) gene, which is responsible for metabolizing neurotransmitters in the brain and for promoting smooth communication between neurons. A low-activity variant of the gene by itself had little effect. However, when combined with abuse and maltreatment, men with low MAOA activity were three times as likely to have been convicted of violent crimes by age 26, and 85 percent of severely maltreated boys with low MAOA activity developed some form of antisocial behavior. As we begin to see the complex interaction between our biological inheritance and the complexities of the social context, the interdependence of both is clear. Many of these traits were adaptive in response to evolutionary requirements. As society has evolved, many of these traits are no longer functional. Being a little fearful and neurotic might have been very functional in the days of saber-toothed tigers, but these traits create interpersonal problems for those who have inherited an excess of them today.

3. The nature of the self-concept: the hard and easy problem
William James (1890) is today recognized as a founder of American psychology. In his early writings he described the essential duality of the self-concept. The first aspect of the self-concept is composed of all the thoughts and beliefs we hold about ourselves, also called the “known self” or “me”. The second component of the self is the “knower”. The “knower” refers to the observing function of the self, now more commonly called self-awareness. We come to know who we are by becoming aware of and thinking about ourselves.

Today the aspect of the self defined as the self-concept or “me” is gradually being understood through experimentation. The self-concept and its relationship to brain functions is what might be called the “easy” problem. The hard problem, which remains somewhat of a mystery, is what is called the “knower”. Those with religious inclinations would refer to the “knower” as the immaterial soul. The scientist does not find that construct convincing, as the soul construct explains everything and therefore in reality nothing. The soul definition is a form of nominalism that simply puts a label or name to a process, and we do not advance much in our understanding by placing another label on the “knower”.

3.1 The easy and the hard problem in self-definition: Me versus the knower
Freud wrote a great deal about conscious and unconscious processes. Much of our thinking is in fact accessible to our awareness. We make plans for the future, decide what to have for dinner, save for the children’s college education. These and much more are conscious in the sense that they are accessible thoughts that we can think about and evaluate. Other processes, like the functions of the autonomic nervous system, are largely unconscious. We know they are present in the body, but they are generally not available to the reasoning or planning functions of the brain.

The hard problem is trying to understand why it feels like we have a conscious process to begin with, why we are aware of a first-person, subjective experience, the executive “I” or decision maker (Pinker, 2007). The scientist finds it difficult to explain how this subjective feeling of the self arises from neural computations in the brain. Do you believe that all our joys and pains can be reduced to neurological activity in the brain? The hard problem is this: does consciousness exist in an ethereal soul, or is consciousness purely a function of brain activity?

Today some cognitive neuroscientists claim that by using functional MRI we can practically read people’s thoughts from blood flow in the brain. Through electrical stimulation of certain areas of the brain we can cause hallucinations such as hearing music played long ago, or experiencing childhood memories. Antidepressants like Prozac can profoundly affect feelings and thoughts. Also, whenever brain function ceases, so far as we can see, consciousness comes to an end. No reliable reports of contacts with the dead have been produced. Even near-death experiences, in which the soul purportedly departs the body only to return, are probably caused by oxygen starvation of the eyes and brain. Some Swiss neuroscientists (Pinker, 2007) have managed to turn out-of-body experiences off and on by stimulating the part of the brain overlapping vision and bodily sensations. The fact that all observable psychological activity has a physiological concomitant lends little support for a soul construct.

Many visions or “miracles” can be attributed to how the brain developed to meet survival needs. It appears, for example, that we possess a template for the recognition of faces in a variety of objects. Some years ago a woman made herself a cheese sandwich and experienced a vision, as she perceived the Virgin Mary in the brown skillet marks. She eventually sold the sandwich on eBay for $28,000, probably to someone who wanted a vicarious vision. In another case people saw a three-dimensional face on the surface of Mars after an orbiter captured images from the Cydonia region. That image ignited enthusiasm, and encouraged conspiracy theories about the denial of life on Mars. All of us have had the experience of gazing into the sky and finding faces in the moving clouds. These experiences appear to be functions of three regions of the temporal lobe of the brain that are involved in the recognition of faces. The tendency to see faces is a result of neural architecture with obvious evolutionary advantages (Svoboda, 2007). In our distant past some faces or images should be avoided, like that of the saber-toothed tiger; others should be approached, like those of family or beneficent higher powers.

The materialist explanation is advanced by the argument that the “knower” or “executive I” is an illusion. From this perspective consciousness consists of numerous, even overwhelming, external events that compete for attention. As an evolutionary adaptation the brain developed decision-making functions to discriminate between important and non-essential input. Subsequently the brain rationalizes the outcome after it has occurred, giving us the impression that someone was in charge. Information overload requires the decision-making function of the self, and those who developed better neural webs were the ones who survived. Pinker believes that the “knower” is nothing more than “executive summaries of the events and states that are most relevant to updating an understanding of the world and figuring out what to do next” (p. 65).

Damasio (2007) argues that self-awareness is a function of evolutionary biology and psychology. Initially gene networks organized themselves to evolve complex organisms with brains. Further evolution enriched the complexity of brains by developing sensory and motor maps to represent the environmental context. Eventually, with more evolutionary complexity, different parts of the brain developed the ability to communicate and generate sophisticated maps of the organism interacting with the environment. From this natural knowledge the basic self emerges, and the brain’s sensory-motor maps change from nonconscious mental patterns to conscious mental images. Scientists are gradually developing the ability to find neural correlates of the conscious activity of the self.

However, what of the inner experience we called the hard problem? Some would simply call it information processing, thereby making it an “easy” problem. Others would say that since there is no test that could distinguish between a well-designed robot and a human, we should just let the problem go away as irrelevant (Dennett, 2007). Still others will say that our failure to understand the hard problem is a function of the limitations of our brain. After all, we have many other limitations, like failing to grasp spaces of more than three dimensions. Brain limitations include the difficulty of understanding how stimuli from the outside produce subjective feelings on the inside.

Many fear the loss of a moral perspective if we come to believe in a material self. After all, if we do not have an immortal soul, why worry about salvation in an unseen world to come? Others would argue that believing in the materialist self would increase empathy, as we are all in the same existential boat. To be aware of how temporary life and consciousness are should give poignant meaning to all life and sympathy for all who struggle with the same reality. Keep in mind that belief in the immortal soul did not prevent believers from engaging in gross defiance of morality by committing genocide and cruelty. The crusades conquered land with great cruelty still remembered by Muslim zealots today. In the dark ages half a million women were burned at the stake by the inquisition in an attempt to save their immortal souls. The destruction of 9/11 and what followed was largely motivated by religious morality on both sides, including the belief in the immortal soul. Religious ideology often provides heavenly rewards for killing and destruction. Perhaps we would all be better off believing in a fragile and temporary existence.

3.2 The hard problem remains
At the end of the day the hard problem remains unsolved. It seems particularly difficult to understand deep feelings as solely a consequence of brain activity. Some of us have experienced awe in the presence of the truly noble and good. How can one explain these feelings as merely an interpretive consequence of brain activity? The sense of unspeakable joy that comes in the wake of love and the truly altruistic behavior of others resonate in our minds in ways not easily understood by the material self. The cynic can of course reduce altruism to reward expectations, but the “knower” knows the difference. The feelings of grandeur in the presence of nature, and the emotions experienced from certain types of music, are examples of the presence of a “knower”. The drumbeats of the Nazis reflect the robotic self that resonates with martial spirit, aggression, and self-aggrandizement. However, music may also induce meditation and bring us harmony and peace. Understanding meditative feelings, altruism, and the noble as brain functions remains a hard problem.

Perhaps viewing consciousness from the perspective of brain functioning is good science, but philosophically unsound? Science has made great progress in breaking objects into atomic and subatomic particles. Is there a bias in that perspective? Are there other routes to fact and truth? At least we know that the whole is always more than the sum of its parts. Human attributes raise further questions: many people feel compassion toward others. Where does that come from? If we cannot find the answer in neurons firing, then is consciousness a primary principle? Are we really illusions caused by 100 billion simmering neurons? What is the locus for experiencing ideas and intentions in time? Do we perceive time because it is separate from us? Some parts of the self remain for life; we can recognize our basic components, but we are also aware of time and change. If we were caught up in time, could we perceive it? These and many other issues remain part of the most intriguing and fundamental question of human existence.

There is a mysterious aspect to life that even the greatest minds cannot understand. Einstein too was awed by what he saw as a causal and ordered nature. Perhaps he was affected by the certainty of the subjective “I” when he wrote his credo: “The most beautiful emotion we can experience is the mysterious. It is the fundamental emotion that stands at the cradle of all true art and science. He to whom this emotion is a stranger, who can no longer wonder and stand rapt in awe, is as good as dead, a snuffed-out candle. To sense that behind anything that can be experienced there is something that our minds cannot grasp, whose beauty and sublimity reaches us only indirectly: this is religiousness. In this sense, and in this sense only, I am a devoutly religious man” (Isaacson, 2007). Did Einstein address the common human limitation of our brains? Did he attribute religiousness to our inability to understand what are after all natural stimuli? Or did Einstein acknowledge with certainty that the hard problem remains, and will not easily yield a solution?

4. The development of the social self
How do we come to know who we are? The sources of self-knowledge are primarily other people, although we can also learn by observing our own behavior and by thinking about ourselves. Socialization is the context in which we form our self-attributes. It is through family and other socialization agents that we learn about our level of competence, success in achieving important goals, and whether we are evaluated positively. From that we derive self-esteem. Through socialization we acquire our standards for behavior, and we incorporate the values of our family and culture. The way we are consistently treated in early socialization forms the core of what we come to believe about ourselves, and that core guides us throughout life.

Cooley (1902) developed a concept called the “looking glass self”. From his perspective we learn about ourselves through the reactions of other people, a process called reflected appraisals. Those who experience constant praise come to believe they are valuable; those who experience maltreatment grow up thinking their lives are worthless. Feedback from others is thus a basic key to understanding the social self. Its importance can be seen in a study on parental perceptions and children’s self-perceptions (Felson & Reed, 1986). In general there is close similarity between parents’ beliefs about their children’s abilities and the children’s self-concepts.

Later, of course, we encounter peers, and these become profoundly important during adolescence (Leary, Cottrell, & Phillips, 2001). Most of us know intuitively our social standing from the preferences of our peers. The order in which children are chosen for athletic teams tells a lot about a child’s perceived contribution to the team and value to his or her peers. Whether a girl gets asked out on dates also tells her a great deal about how peers perceive her physical attractiveness and personality. Teachers give feedback on school performance that is either encouraging or discouraging in competitive educational environments. Competitive educational experiences using the normal curve for grading feedback do not foster growth in all children; some children will always occupy low or failing comparative standing. These early experiences contribute to whether the individual’s possible self is optimistic or pessimistic. If we are encouraged in childhood and adolescence we form plans about what we can become, what contribution we can make to society, and how we can find self-fulfillment. We have more to say about self and motivation in section 9.

4.1 Forming the possible self through family socialization
A family has influence not only through parental guidance, but also through relationships formed with siblings. In societies with scarce resources, sibling conflict is frequent and sometimes violent. Human history bears witness to violent outcomes from Cain and Abel to current news stories. Even very young children engage in frequent conflict (Dunn & Munn, 1985). Birth order matters because children learn to adjust to niches in the family that are functional and rewarding. Older siblings tend to be more dominant and assertive as well as more achievement oriented and conscientious (Sulloway, 1996; 2001). The larger size of older siblings would naturally make them more dominant, and at the same time give them a greater share of responsibility to look after the younger siblings.

On the other hand, younger siblings tend to be more open to new ideas and to experiment with novel thoughts. In Sulloway’s study of thousands of scientists, younger siblings were more open to novelty and thinking outside the box. On the negative side, they were also more likely to endorse pseudoscientific ideas like phrenology. First-born scientists possessed the consistency to make many scientific discoveries within established frameworks, whereas younger siblings were risk takers traveling far in search of novel ideas. Darwin, for example, was the fifth sibling in his family, and developed a theory that changed physical and social science forever. He risked a great deal in his search for scientific data, traveling to unknown parts of the world to collect information in support of evolution, a theory that challenged the very fabric of our religiously founded beliefs about the origin of man.

4.2 The social self and group membership
Our social identity becomes part of our self-concept as we learn the values associated with group membership and its emotional significance in our lives (Tajfel, 1981). Much work completed in recent decades shows that mere membership, even in meaningless groups, has profound effects on behavior and self-conception (e.g. Doise, Dann, Gouge, Larsen, & Ostell, 1972). Since membership in nonsensical groups produces significant influence on behavior, how much more powerful is the influence of group identity when based on membership in real social groups that evoke attitudinal reactions from society? Members of minority groups often face conflicting demands from membership in the minority group and from coping with the larger society (Sellers, Rowley, Chavous, Shelton, & Smith, 1997). Some minorities develop bicultural competence and identity; others are assimilated into the dominant culture, and yet others are marginalized from both societies (Ryder, Alden, & Paulhus, 2000; Phinney, 1991).

Minority status has important consequences for the self-concept and self-esteem. As socialization takes place, the individual often engages in self-stereotyping, identifying with the attributes thought positive in the group (Biernat, Vescio, & Green, 1996). Bicultural identification seems to produce the best results for self-esteem (Phinney, 1991). High self-esteem in minorities is a function of strong ethnic identity combined with positive attitudes toward the mainstream culture. It stands to reason that those with bicultural identities and competence will experience life as more rewarding, and will function more successfully in society.

4.3 Culture as a source of the self-concept
In chapter 1 we introduced the concept of independent and interdependent cultures. It is now time to apply the concept to the formation of the social self. We shall see that this cultural difference has applications throughout this chapter and in the chapters that follow. Culture has profound effects in socializing people, and it produces predictable differences in self-concepts between members of different cultures. Western societies found in North America and Europe have inculcated social values significant to adaptation and survival in the capitalist model. The term “rugged individualism” points to a person who is first and foremost independent, and able to cope with the hazards of life in the early United States. In this cultural environment the values of individual rights and freedoms were promoted, at least formally. Each man was a king in his own house, and society was preoccupied with individual self-actualization.

In Asian societies, on the other hand, we have ancient cultures that had to adapt to high levels of population density. Physical density is not experienced as crowding the way it would be in the West, because highly developed structures of courtesy meet the need for personal space and privacy. These cultural differences have been summarized in the terms “independent” and “interdependent” societies introduced in chapter 1. Hall (1976) thought of independent societies as “low-context cultures” where social roles, while not unimportant, matter less. Therefore a person from an independent culture would act more or less the same regardless of the changing context or situation. In interdependent cultures, on the other hand, the social context matters a great deal, and the individual’s behavior will change depending on the specific role played by the participant. In interdependent cultures the self differs depending on role expectations. The person would behave differently depending on whether the behavior involves a relationship with parents, peers, or colleagues. As we shall see, in Western societies the bias toward independence leads to attribution errors in which we underestimate the influence of the situation and attribute behavior primarily to individual traits.

In recent years social psychologists have carried out many cross-cultural studies on how motivations, emotions, and behaviors are shaped by cultural conceptions of the self (Markus & Kitayama, 1991; Rhee, Uleman, & Roman, 1995; Triandis, 1995). From this accumulated research the independent cultures are identified primarily with the West. In these societies the self is seen as autonomous, as distinct and separate from other members of society. The focus of the independent self is on what makes the self distinctive or different from others. Consequently explanations for behavior are sought within the individual’s personality. Not only is independence a fundamental value, but Westerners also believe that the main object of socialization is to create independent children (Kitayama, 1992). The self is therefore described as composed of individual attributes (Trafimow, Triandis, & Goto, 1991). Achievements are seen as primarily the result of individual and distinctive efforts, in which family or society played at best peripheral roles.

In the interdependent cultures of Asia and countries in the developing world the self is perceived as part of the larger social context. The self is not construed apart from other people, but rather as connected to family and larger social organizations. The willingness of people to go on suicide missions, like the kamikaze pilots of Japan, is related to the interdependent self-construal in which country and emperor are part of the self. Western combatants may also fight with great courage; however, that courage is best elicited when there is some possibility, if not probability, of survival. In interdependent societies the self is completely embedded in the roles and duties of social relationships. Culture therefore determines to a large extent self-knowledge and self-esteem, as well as self-presentations and impression management. The self is connected to the attributes of others and is seen less as distinctive than as sharing common traits (Bochner, 1994). These cultural differences are thought to profoundly affect how individuals think about themselves, how they relate to others in society, and what motivates their behavior (Markus & Kitayama, 1994).

Studies have shown that Americans achieve primarily for personal reasons, whereas those from interdependent societies strive to achieve group goals (Iyengar & Lepper, 1999). It is the personal nature of tasks and objectives that motivates behavior in the West, whereas Asian students are motivated more by group goals. Consequently students in the West are more likely to select careers or tasks in which they have experienced previous competence or which have been positive and rewarding in the past. The career choices of Asians, on the other hand, are not based on such personal expectations or prior performance (Oishi & Diener, 2003).

As we can imagine, these cultural differences in self-construal also affect how we organize information in memory (Woike, Gershkovich, Piorkowski, & Polo, 1999). People in independent cultures disregard the social context in memory formation, or think of events in personal terms. Elections in the United States are typically about the personal attributes of candidates, and the social context matters little. Typically this process manipulates an indifferent electorate to disregard political programs in the search for the “right” person.

Some researchers feel these cultural differences in self-construal make intercultural communication very difficult (Kitayama & Markus, 1994). Yet, at the end of the day we must remember that these cultural differences are abstractions. There are always more differences to be found within than between social groups. In independent cultures there are many with interdependent self-construals, particularly among women (Cross, Bacon, & Morris, 2000; Cross & Vick, 2001). In interdependent societies there are those whose self-construals are independent. Further, migration is changing the world. Within the United States and Europe, for example, there are many immigrants who think of themselves in interdependent terms. Many migrants work hard in Western societies just so they can send most of their earnings back to the home country. Globalization is also producing more convergent values, for example an emphasis on human rights in nearly all societies, and as that process takes its course we must reevaluate the cultural differences discussed above.

4.4 Gender and the social self
Gender is the most obvious parameter in our self-concept. In every society males and females are treated differently, with life-long consequences. Women are more interdependent, as they tend to view themselves as connected through relationships, as mother, daughter, or wife. Their behavior therefore tends to be more influenced by the thoughts and feelings of others, because relationships are construed as central to self and life (Baumeister & Sommer, 1997; Cross & Madson, 1997; Cross, Bacon, & Morris, 2000; Gabriel & Gardner, 1999). Women display relational interdependence in close relationships, especially within the family. Men, on the other hand, display a more collective interdependence within larger groups such as political parties, athletic teams, or feelings of national identity (Brewer & Gardner, 1996). Consistent socialization processes throughout the world lead females to focus more on intimacy and to have a greater willingness to discuss emotional topics than men (Davidson & Duberman, 1982). These gender differences in self-construal appear consistent across cultures (Kashima, Siegal, Tanaka, & Kashima, 1992), and reflect the different functions of the sexes in the historical and evolutionary struggle for survival.

When women define themselves they use references to other people and relationships. For example, when asked to provide photographs they are more likely to include intimate others in the photos (Clancy & Dollinger, 1993). Women spend more time thinking about their partners (Ickes, Robertson, Tooke, & Teng, 1986), are better judges of other people’s personalities, and are more empathetic (Bernieri, Zuckerman, Koestner, & Rosenthal, 1994; Hall, 1984). In directing their attention toward others women also demonstrate greater alertness to situational cues and the reactions of other people, whereas men focus better on internal processes such as increases in heart rate (Roberts & Pennebaker, 1995).

How does socialization encourage gender differences in self-construal? All the agents of socialization are at work. The media portray women differently from men, encouraging interdependent stereotypes. The educational system forms different expectations for appropriate goals and behaviors. Parents treat girls differently than boys from the very beginning. All these socialization agents work consistently together to establish reliable gender differences (Fivush, 1992). Throughout childhood girls and boys play in separate playgroups, with girls playing more cooperatively and boys engaging more in competitive games (Maccoby, 1990). In early human history these gender differences most likely evolved in response to evolutionary demands that rewarded with survival those who developed gender-specific traits. As our infants are the most dependent of any species, we are fortunate for women’s innate desire to love and look after defenseless infants, and their very personal interest in the survival and well-being of their babies. In the following sections we will consider two theories explaining the development of the social self.

5. Social comparison theory: learning about the social self from others
Festinger (1954) proposed a theory for understanding self-knowledge. He asserted that people have a drive to accurately evaluate their beliefs and opinions. Since there are no explicit physical standards for psychological constructs, we learn by comparing our thoughts with those of people who are similar to us. This original model has been elaborated a great deal since it was first proposed (Goethals & Darley, 1987; Wood, 1989; Suls & Wheeler, 2000). Research has shown that people compare themselves across all imaginable dimensions including emotional responses, personality traits, and objective dimensions like equity in salary. Any relationship that makes the self salient can evoke the comparison process: our marriage compared to other couples, our racial group compared to others when evaluating fair treatment, our fellow students when checking answers to test questions and grades. All such comparisons contribute to relative satisfaction depending on the outcome.

5.1 Comparing for self-enhancement or achievement
How do we get a sense of who we are without reference to the accomplishments or failures of other people in similar situations? Sometimes we seek self-enhancement by comparing downward, to someone not doing as well, or to those less fortunate. By comparing ourselves to those who earn lower grades, receive lower salaries, or are hungry, many can at least temporarily feel better (Lockwood, 2002). Downward comparisons are especially strategic when one has experienced failure. By comparing downward and emphasizing one’s positive qualities the damage to self-esteem is reduced (Mussweiler, Gabriel, & Bodenhausen, 2000).

At other times we are interested in improvement, trying to reach a relevant and lofty goal. In that case successful others can serve as models for achievement comparisons. Most of us, perhaps all of us, will not achieve the mathematical insight of Albert Einstein. However, the aspiring scientist may be inspired by his example and seek a related, self-relevant high achievement. At times upward comparisons are discouraging. When the goal is truly unreachable the comparison can result in envy and feelings of inadequacy (Patrick, Neighbors, & Knee, 2004). Anorexia and bulimia are serious problems in today’s society, caused, many believe, by the media’s emphasis on thinness for women. Nearly all models of women’s clothing are extremely thin, and in fact look unhealthy. Perhaps worse, they set an unattainable standard for most women (see also the discussion of social influence in chapter 7). Women who place high value on physical appearance suffer in self-esteem from such social comparison (Patrick et al., 2004). In summary, some comparisons can be inspirational if the goals are possible and realistic in a person’s future, but discouraging and demoralizing if they involve impossible goals or dreams.

Some people also compare from a desire to bond with others in the same straits (Stapel & Koomen, 2001). How do we react to a crisis like Hurricane Katrina and other natural disasters? Most of us will look to others to find the appropriate mixture of fear and courage in dealing with the situation. We also compare to similar people to enhance a sense of solidarity and common fate (Locke, 2003). When experiencing common fate, people compare their responses to others to feel the strength of the community in facing crisis situations.

Social comparisons may occur in any situation of uncertainty when we are trying to find an appropriate response (Suls & Fletcher, 1983). Suppose you find yourself invited to a formal dinner party for the first time, a situation of some anxiety. Being uncertain how to dress appropriately, you ask the host for some helpful guidelines. At the dinner party, chances are that you will let more experienced others carry the conversation until you get your bearings.

5.2 Social comparisons in summary
In general we seek comparisons with similar others, but if we want to enhance the self we compare downward; if we are motivated by a desire for improvement we find more successful models (Goethals & Darley, 1977; Blanton, Buunk, Gibbons, & Kuyper, 1999). Sometimes we enhance the self-concept by comparing temporally with our former self (Ross & Wilson, 2002; Wilson & Ross, 2000). Most of us can find events from our earlier life that are more negative than our current situation. For example, perhaps we have fewer friends as we get older, but we believe that the quality of our relationships has improved. To enhance the self we can compare our lives temporally and conclude that although the quantity of relationships has declined, lifelong friendships have a higher value than those formed in our youth.

6. Self-perception theory: self-knowledge by self-observation
Experience produces familiarity, and most of us know how to react in situations we have encountered previously. You listen to a political leader and, from the storehouse of memories, have ready feelings about the message and the messenger. Most people have established attitudes about a variety of topics like hip-hop music, jazz, or classical music and know how to react based on these schemas. At some point, however, you may experience the novel or unfamiliar and be uncertain how to respond. A stranger hands you a $100 bill; how should you react? Should you be happy or offended? If you react with joy, you may examine your reaction and conclude that you are happy. Self-perception theory (Bem, 1972) asserts that when our attitudes or feelings are ambiguous we infer their meaning by observing our own behavior as well as the situation. In other words, when we are unsure of our feelings we infer them from how we actually respond. You find yourself laughing in the presence of another person and conclude that he or she makes you happy. You observe yourself kissing the person and, from that and the other’s behavior, conclude that you are in love. When a person is in a situation not previously evaluated, and feelings are somewhat of a mystery, objective behavior often becomes the guide to explaining these feelings (Andersen & Ross, 1984; Chaiken & Baldwin, 1981).

Secondly, in deciding the meaning of the behavior, it is attributed to either the person or the situation. Is the situation compelling your behavior, or is the “executive I” in charge? If we are in control of the situation and feel in charge we may attribute the feelings to our dispositions. If, however, there are compelling pressures in the situation we are likely to attribute feelings to the situation rather than to the self. In short, self-perception theory argues that we infer our feelings by observing our own behavior and then infer either a personal cause or a situational reason for that behavior (Albarracin & Wyer, 2000; Dolinsky, 2000). We have more to say about self-perception and attitude formation in chapter 5.

Self-perception theory has important consequences for education and learning. For example, does learning occur because of some extrinsic reward like grades? Such extrinsic reward is likely to produce short-term learning, since the student feels justified in forgetting the material once the reward is achieved. All the anxiety and cramming that occur in American universities are not for any intrinsic pleasure of learning, but just to pass a course or get good grades. Some children, however, learn because of the intrinsic pleasure of mastering a subject. Students who are intrinsically motivated engage the subject matter because they find it interesting and enjoyable (Ryan & Deci, 2000; Senko & Harackiewicz, 2002). Self-perception theory would argue that rewards can inhibit intrinsic motivation and destroy the pleasure of mastering the subject matter. When students come to believe that they are learning to obtain rewards, they underestimate the role played by intrinsic motives (Deci, Koestner, & Ryan, 1999; Lepper, Henderson, & Gingras, 1999). So although rewards can be motivational in the short run, they may produce external attributions that overlook the intrinsic pleasure of learning.

It is obvious that any significant achievement occurs only where the self attributes intrinsic pleasure to the pursuit of knowledge. Students may pass courses, but little of the information learned for the reward of grade incentives will be stored in long-term memory. When the rewards cease so does the motivation to remember, which is why so much of the information learned is lost within weeks. In one study on math games, children’s performance under a reward program was compared with performance during a follow-up period in which no rewards were provided. The reward program did initially produce more interest and the children played more. However, those who had initially enjoyed the games lost interest during the follow-up and played less after the reward program ended (Greene, Sternberg, & Lepper, 1976). The researchers determined that it was the reward program that caused the children to like the games less. Related research (Tang & Hall, 1995) should cause us to think about what we do to the minds of children in an obsessively grade-competitive educational system.

For parents, rewards can be a two-edged sword. Praise for work well done can increase the child’s self-esteem and sense of self-efficacy. It can also convey something about parental expectations for future work. But it is important that the child believes the performance is not for external rewards but for reasons that are intrinsic and enjoyable. The child must have some control in the educational process, and teachers and parents can nurture intrinsic motivation by providing enjoyable learning activities (Henderlong & Lepper, 2002). Otherwise the child comes to attribute the reasons for performance to the reward system, with a resulting loss of motivation.

6.1 Schachter’s two-factor theory of emotion
Schachter (1964) proposed a theory of emotion using self-perception ideas. Essentially the theory proposes that we learn to infer our emotions the same way we learn about our self-concept: by observing our own behavior. In Schachter’s theory people observe their internal physiological experiences and try to make sense of them by looking for the most plausible explanation. The theory is called two-factor because we first experience the physiological reaction and then look for a reasonable cause to explain it. One now classic experiment was carried out to test this theory (Schachter & Singer, 1962). When the subject arrived for the experiment he was told he was participating in a study on the effect of a vitamin compound called Suproxin on vision. After the injection the subject was led to a waiting room to let the drug take effect. While there the subject was asked to fill out a survey containing some very insulting personal questions, including one asking about his mother’s extramarital affairs. Another participant present, an experimental collaborator, also read the questions, angrily tossed the survey on the floor, and left the room.

In fact the real purpose of the experiment was not to study vision, but to understand people’s reaction to physiological arousal and the meaning attached to it. The participants were not given a vitamin compound but were injected with epinephrine, a hormone produced by the body that causes increased heart and breathing rates. How would you feel in a similar situation? You would have noticed the physiological change that occurred from the epinephrine. Your breathing rate would have increased and you would have felt aroused. Then the other participant reacts with anger at the survey. What is the most plausible explanation for the arousal that you feel? Since you have no information that you have been injected with epinephrine, the most plausible explanation is found in the situational context of the survey and the other participant’s anger. In fact that is what happened, and the participants injected with epinephrine were much angrier than the participants given a placebo.

In an extension of this work the researchers demonstrated that emotions are somewhat arbitrarily defined, depending on the most plausible explanation found in the situational context (Schachter & Singer, 1962). For example, the emotion of anger could be averted by offering a nonemotional explanation for the arousal. The researchers accomplished this by telling the participants that they could expect to feel aroused after being injected. When the subjects then began to feel aroused they inferred that it was the injection that caused the change, and they did not react with anger. In yet another condition Schachter and Singer demonstrated that they could create a very different emotion by providing an alternate explanation for the arousal. In this condition the experimental collaborator acted euphoric and happy. The subjects began to feel the same way and inferred that they too were feeling happy and euphoric. In short, Schachter and Singer showed that emotions are part of the self-perception process in which people seek the most plausible reason for internal bodily changes.

6.2 Misattribution for arousal
Since we have no explicit standard to determine what causes our emotions, we can misinterpret the cause (Savitsky, Medvec, Charlton, & Gilovich, 1998; Zillmann, 1978). We now know that the same physiological arousal occurs in a variety of circumstances and to varying stimuli. In some situations there may be more than one source to which we can attribute the arousal. To what do we attribute the increased heartbeat, shallow breathing, and rise in body temperature? If we are next to another person, could the physiological changes be the effect of that person? What if we are next to that person during a parachute jump? Is it fascination with the other person, or the fact that we are approaching the Earth at great speed, that causes the increased heartbeat? There is no standard that will tell us for certain, and the possibility of misattributing the cause exists in all such circumstances.

In the classic Dutton and Aron study (1974) the researchers demonstrated the ease with which misattribution of arousal can occur. The experimenters had an attractive young woman approach males with a survey purportedly for a project for her psychology class. When they completed the survey she explained that she would be happy to explain more about the project at a later time, wrote her phone number on a corner of the page, tore it off, and gave it to the participant. This procedure was followed under two independent experimental conditions. In the first condition the men were approached after they had crossed a rickety 450-foot-long footbridge suspended high above a river in Canada. Most of us would, after such a crossing, experience all the symptoms of the epinephrine injection in the Schachter and Singer study. Most people have hard-wired brains preferring low and safe altitudes, and this bridge was very high and did not give the appearance of safety. As the men were approached immediately after crossing, their hearts were still racing and they experienced physiological arousal. In the second condition the men were allowed to rest for a while after crossing, and had a chance to calm down somewhat, before the woman approached. They too were given the phone number and the opportunity to call later for more information.

What outcome would we predict from Schachter’s two-factor theory? In the first condition the men had just experienced physiological arousal and were primed to find a plausible explanation. The most plausible cause for what they felt was the crossing of the bridge, but the beautiful woman made the stronger impression. Was the arousal due to the presence of the woman? In fact the results showed that significantly more of the men who were approached having just crossed the bridge subsequently called the woman to ask for a date, whereas few did if they were approached after resting. In other words, the men misattributed the cause of their arousal from the true source, the crossing of the bridge, to the more powerful stimulus of the lovely woman. Misattribution of arousal has also been found in other studies (Sinclair, Hoffman, Mark, Martin, & Pickering, 1994).

6.3 Cognitive appraisal theory: Emotion follows cognitive interpretation
Some researchers have noted that we sometimes experience emotion when there is no physiological arousal (Roseman & Smith, 2001; Russell & Barrrett, 1999; Scherer & Schorr, 2001). Cognitive appraisal theories explain that sometimes emotions follow cognition, after we determine the meaning of the event or situation. We appraise the event in terms of implications being good or bad, and what caused the event. A colleague is given a promotion, how do you interpret that event. If you live in a professional world of zero sum game behavior where someone’s promotion gives you less of a chance to advance, you may feel envy and later anger. However, if you are already at the top of the game and can advance no further you might feel happy. Suppose you have helped the colleague? Then perhaps you can attribute his or her success to your advice and assistance and feel pride (Tesser, 1988).

The main point is that in cognitive appraisal theories the arousal comes after cognition, after meaning and cause have been attributed to the event. Arousal does not always precede emotion. Sometimes we feel the emotion as we begin to fully understand the implications of what has happened and how the situation has changed. The two-factor theory and cognitive appraisal theories complement each other: the two-factor theory explains emotion when arousal comes first and is then interpreted, whereas cognitive appraisal theories explain emotion when interpretation precedes arousal.

7. Introspection: An unreliable source of self-knowledge
We can also learn about ourselves by “looking inside” and examining our own thoughts and feelings. You find yourself in an emergency situation when a man is drowning and immediately jump in the water to save him. Afterwards you think about the event, and come to the conclusion that the reaction was consistent with who you are, with your self-concept. Sometimes looking inside can provide accurate answers; other times it can be misleading. You may think introspection is so obvious a source of self-knowledge that it is routine for most people. In fact we spend little time thinking about ourselves (Wilson, 2002). Even when we do introspect, the true reasons for behavior may not be part of the conscious process. In one study (Csikszentmihalyi & Figurski, 1982) the participants wore a beeper that sounded some 7 to 9 times a day. Each time the beeper sounded the respondents were asked to record their thoughts and moods, which were subsequently content analyzed. From all these responses the investigators determined that only 8 percent were about the self. Since life is about survival it is not surprising that much more thought was given to work, but it nevertheless suggests that the self is not a favorite object of contemplation.

Self-awareness theory holds that people focus attention on the self in order to evaluate behavior in terms of meeting internal standards and values (Carver, 2003; Duval & Silvia, 2002). Only the psychopath would spend no time being self-conscious and trying to evaluate the self objectively. Bundy, the serial killer, spent the very last moments of his life trying to rationalize his behavior, attributing his deeds to pornography. Of course the opposite is also true: some people have rigid moral systems and spend much time in self-accusation and self-blame. Most of us fall in between, and from time to time become aware of discrepancies between behavior and moral beliefs. At times such self-awareness can be very unpleasant and motivate improvement and changes in life (Fejfar & Hoyle, 2002; Mor & Winquist, 2002). When self-awareness becomes too unpleasant we seek escape. Is that the reason so many people spend a good part of their lives watching television (Moskalenko & Heine, 2002)? The popularity of soaps could be understood as a way of solving personal problems by identifying with characters outside the self. Some escape is necessary in a stressful world. It becomes maladaptive when it substitutes for real answers to the person's life and challenges.

At times escape takes the route of alcohol or drug abuse. When people drink to excess they can at least temporarily divert attention away from the self, although the day after may bring back unpleasant anxiety. The fact that so many people worldwide are involved in drug abuse is a testimony to how unpleasant self-awareness can be (Hull, Young, & Jouriles, 1986). Religious devotion can also be a way to escape self-focus, and find forgiveness for not living up to moral standards. Like drug abuse, some forms of religious devotion are self-destructive when the well-being of the self is totally ignored. What comes to mind are the suicide bombers who seek total escape to "paradise" in acts of self-destruction. At other times self-awareness can be pleasant. When you graduate from the university or professional school, or complete other significant achievements, you may rightly feel enhanced by self-awareness (Silvia & Abele, 2002). Sometimes self-awareness can help us avoid moral pitfalls when we are tempted to ignore some moral prompting. So self-awareness can serve both positive and aversive roles in human psychology.

One problem with introspection is that it may not tell us the real reasons for our feelings, since these may lie outside our awareness (Wilson, 2002). You find yourself instantly attracted to someone; how do you explain such feelings to yourself? Is it a purely physical stimulus, or is it something else? Have you discussed important issues and found yourself in agreement, so that you believe the attraction is based on similarity? People at times feel an instant chemistry (called that because we have no other explanation), but the real reason for our feelings escapes self-awareness. Introspection may not be able to access the causes of many feelings because we are simply unaware of the reasons. Most people will come up with plausible explanations, but these may in fact be untrue or incomplete.

Growing up in our societies we all acquire causal theories about feelings and behavior. For example, many people believe that mood is affected by the amount of sleep, whereas mood is in fact independent of preceding sleep (Niedenthal & Kitayama, 1994; Wegner, 2002). Our legal system gives women custody of children based on the common belief that they are the best custodians. Yet we know that women also commit infanticide and child abuse. Often causal theories are simplifications or simply not true, and we can make incorrect judgments about our behavior or actions. Sometimes influences that operate below the level of awareness are the deciding factor in behavior. In one study of clothing preference people evaluated clothing of identical quality. Whereas their causal theories might promote the idea that choice was based on quality, the investigators showed that it was the position of the clothing on the display table that mattered. The clothing that was placed farther to the right was preferred (Nisbett & Wilson, 1977). Most people would intuitively reject that idea, but it was the causal factor, perhaps dictated by brain hemispheric dominance. In all, this research shows that we should use caution in accepting causes derived from introspection about our behavior. We may come up with very plausible reasons, but they may be incorrect, and unimportant in the final analysis.

8. Organizational functions of the social self
Self-knowledge takes on many forms including the beliefs we have of ourselves, our self-esteem, our memories, and, especially in the West, what we think are our distinctive attributes. Self-knowledge describes our social beliefs, our roles and obligations, and our relational beliefs that refer to our identity as part of families and community. Furthermore it describes our personal beliefs with respect to our traits, abilities and other attributes (Brewer & Gardner, 1996; Deaux, Reid, Mizrahi, & Ethier, 1995). Self-knowledge exerts primarily a constricting and narrowing influence on perception. We construe the current situation with information from previous history, thereby overlooking what might be novel. Information and experiences are made to fit our preconceived ideas about the self. In general, information that can be integrated into what we already know about ourselves, our schemas, is more easily recalled. This self-reference effect has been demonstrated in several studies (Klein & Kihlstrom, 1986; Klein & Loftus, 1988). So self-knowledge not only shapes what we are likely to remember, but makes recall more efficient (Rogers, Kuiper, & Kirker, 1977).

8.1 Self-schemas: Structured cognitions about self-relevant concepts
What are the dimensions you use to think about important matters? Do you consider yourself an independent person? Do you want to do everything on your own rather than rely on assistance from parents or spouses? Are you hard-nosed about immigrants in your country? Then you might think the country's future depends on how global migration is solved. Self-schemas are defined as our organized thinking about important matters that is readily available in memory.

If peace as a concept were an important dimension, you would have a storehouse of memories and beliefs readily available to comment on the ever-growing conflicts in the world. Some of the beliefs might explain the causes of conflict as, for example, derived from greed, intolerance, or the desire to control oil resources. One schema might define the solution to conflict as treating everyone equitably. For each relevant issue your preexisting knowledge is organized for readily available responses. Possessing schemas allows us to quickly identify and recognize situations that are schema relevant (Kendzierski & Whitaker, 1997). We judge others' behavior and essence according to their similarity to our own personality. One study asked the respondents to rate themselves and twenty other people. The results showed that the dimensions the respondents found important in rating themselves were also employed in rating others. The execution of Saddam Hussein was a grim affair. However, you may have noted that he went to his death with great personal courage and dignity. If you value bravery in the face of annihilation your opinion of Saddam Hussein might have changed somewhat, independent of your evaluation of his policies as a political leader. We tend to use self-knowledge in an egocentric fashion when evaluating others. If scholarship is important to you, you may apply strict standards in judging the scholastic work and ability of others (Dunning & Cohen, 1992).

We cannot attend to everything in the environment. We selectively attend to those situations that are most relevant to the self. Self-schemas allow us to access information quickly and respond efficiently (Markus, 1977). Self-schemas are also restrictive and prevent information from being evaluated if it is seen as inconsistent with what we already believe.

Most people display a self-image bias (Lewicki, 1983). Again culture may play a role. In the West the self-bias exists because the self is construed independently. Asian students, on the other hand, are more likely to say they are similar to others rather than that others are similar to them. Therefore in Asian self-construal, the other person becomes the standard for comparison. In one study on being the center of attention (Cohen & Gunz, 2002) the researchers showed that Asian people's self-knowledge incorporates the perspective of others. In comparing Asian students with those who were native to Canada they found that Canadians were more likely to assess the situation from their own independent perspective, whereas Asians took the perspective of other persons in describing similar situations.

An important property of self-schemas is the sense of stability that they confer on the self-concept: the feeling we have that we are essentially the same person over time, that the core of the self remains the same (Caspi & Roberts, 2001). For example, children who are identified as shy as toddlers still remain shy at age 8 (Kagan, 1989), and have problems with social interaction later in life (Caspi, Elder, & Bem, 1988). Whatever we are in early life is likely to remain over time, as we behave consistently and selectively according to our self-schemas. Consistency holds for functional and, alas, also for maladaptive behavior. We are likely to remember information that is consistent with early self-schemas and disregard disconfirming events. As we review the past, self-schemas are employed to confirm our present self-concept and we resist thinking about discrepant or novel information (Ross, 1989).

8.2 Self-regulation
An important aspect of self-schemas is the concept of the possible self. Possible selves are our conceptions that propel us into the future in search of goals and achievements (Markus & Nurius, 1986). Some of us grow up envisioning a particular career. Seeing ourselves as doctors, tradespeople, or mechanics leads us to the training required and sustains the motivation necessary to reach the goals. Those who have a vision of future possible selves work harder at accomplishing relevant tasks (Ruvolo & Markus, 1992). Self-schemas have obvious adaptive value. They not only allow us to quickly identify relevant situations and recall appropriate and effective behaviors from memory; they also guide our behavior as we think of what is possible in the future.

So the self serves regulatory functions, determining people's choices and their plans for the future (Baumeister & Vohs, 2003; Carver & Scheier, 1998). We appear to be the only species capable of long-term planning. Plans for our educational goals, or for family-related matters like acquiring an ideal home, require a self capable of self-regulation. In self-regulation a finite amount of energy is available. If we spend much self-regulative energy during the day we have less left over at night. Is that why couples have more arguments after a long hard day at work (Baumeister & Heatherton, 1996; Vohs & Heatherton, 2000)? Research shows that dieters are more likely to fail at night when they are tired. Former smokers are more likely to take up the habit again after experiencing adversity, and bulimics are more likely to binge eat after a long day of self-control. With only so much energy available, self-control has limits. We all need rest periods to restore the energy necessary to achieve health-related goals.

Our self-regulation is determined to some extent by the culture in which we were socialized (Dhawan, Roseman, Naidu, Thapa, & Rettek, 1995). A study comparing Japanese with American college students demonstrated a cultural difference consistent with interdependent and independent societies. Typically American college students perceive themselves in terms of personal traits. The independent self-construal emphasizes that which makes the person distinct. Self-regulation pertaining to personal achievement would rank high as an important trait in independent cultures. On the other hand, Japanese students defined themselves much more in terms of social roles, recognizing their relationship to family and society.

8.3 The stable versus the working self-concept
A stable self-concept provides the sense of self-continuity from early memories to the present. However, some situations call for specific attributes that are part of a temporary working self-concept. The citizen soldier may have a stable self-concept that includes a working career and family life. However, when he goes to war the situation requires different attributes that become part of a working or temporary self. This working self-concept may involve a willingness to engage in violent behavior guiding action while in the war zone. Sometimes behavior in the war zone may permanently change a person, and the temporary self becomes part of the stable self. Many members of the Armed Forces returned from the war in Vietnam with permanent scars affecting their relationships and trust in other people in their civilian life. The temporary self guides what goes on in a specific situation, but may itself become part of the stable self (Ehrlinger & Dunning, 2003).

In less traumatic circumstances the working self-concept may operate on the periphery of the self, and when the individual returns to normal circumstances the stable self takes over (Nezlek & Plesko, 2001). In one study (Crocker, Sommers, & Luhtanen, 2002) the investigators studied applicants to graduate school. The respondents were asked to complete self-esteem measures on days when they received acceptance or rejection notices from graduate school programs. For those respondents whose self-esteem depended a great deal on scholastic achievement, acceptance to programs increased self-esteem significantly, whereas rejection decreased self-esteem. In one graduate program, rejections and acceptances were noted on a comparative poster for all students applying for Ph.D. programs (KSL). A similar enhancement reaction occurred: those who were accepted enhanced the self. Whose idea do you think the poster was? Probably that of applicants who were very confident of acceptance and sought further evidence for self-enhancement in the eyes of fellow students!

9. Motivational properties of the self-concept
A major function of the self-concept is its relationship to motivation (Higgins, 1999; Sedikides & Skowronski, 1993). What is it that causes us to make plans for the future? Our possible selves refer to our possibilities, what we can become or hope to be in the future (Cross & Markus, 1991; Markus & Nurius, 1986). The self-concept also includes social, cultural, and religious standards that we utilize in deciding on our behavior. Feelings of shame or guilt are associated with these aspects of the self (Higgins, 1987; 1999). We compare our actions not only to the actual self, who we believe we are, but also to the ideal self, who we would like to be, including all our aspirations. The "ought" self, which refers to the duties and obligations we feel from family and society and whether we behave appropriately, also has motivating properties. These various aspects of the self have proven to have motivational properties in terms of cognition as well as behavior (Shah & Higgins, 1991).

9.1 Discrepancies and motivation
When we observe discrepancies between the actual self and what we think we ought to be, we often experience fear or anxiety (Boldero & Francis, 2000). Loss of self-esteem might be defined as a discrepancy between the actual self and the ideal or ought selves. The greater the discrepancy, the more dejected the person feels (Higgins & Bargh, 1987; Moretti & Higgins, 1990). These effects derive from what Freud would call the superego, the early socialization that incorporates parental standards into the self-concept. The ideal self has a special influence when warm and accepting parents raise children. Children who have been raised by more rejecting parents, on the other hand, think of behavior primarily in terms of meeting standards and avoiding rejection (Manian, Strauman, & Denney, 1998).

In recalling scenes of embarrassment, Asians saw them through the eyes of other persons rather than from the perspective of personal feelings (Chau, Leu, & Nisbett, 2005). People raised in independent cultures are more likely to look to the ideal self for guidance in regulating behavior, and to be motivated to reduce discrepancies. People who are raised in interdependent environments pay more attention to the demands made by family and society as expressed by the "ought self" concept (Lee, Acker, & Gardner, 2000). The route to well-being is to regulate behavior so as to reduce or eliminate discrepancies between these aspects of the self and the goals pursued in life (Bianco, Higgins, & Klem, 2003).

9.2 Motivated by consistent and accurate selves
We all experience a sense of the self that is stable from childhood through the varying stages of life. Perhaps consistency in the self-concept is partially a cultural need as our rationalized society expects consistency in behavior to plan life-sustaining activities. Without consistency, a factory could not plan a work program, without a sense of continuity in traits and abilities the individual could not plan for the future, and society would be unable to educate. We need to believe that there is something within us that is consistent over time (Swann, 1983).

The motivating properties of self-consistency can be observed in a study by Swann and Read (1981). The participants were given feedback that was either consistent or inconsistent with their self-conceptions. Results showed that the students spent more time studying feedback consistent with the self-concept than inconsistent information. The need for self-affirmation can also be observed in our selective behavior. We tend to interact only with those who confirm our self-concepts. If we have a high estimation of our scholarly abilities we probably make friends with other students who also think we are good students and affirm our self-concept (Katz & Beach, 2000). We remember information that confirms our self-concept better (Story, 1998), and we hold consistent self-beliefs as members of groups (Chen, Chen, & Shaw, 2004). This search for self-affirmation is modified by self-esteem. People who possess high self-esteem are willing to entertain both positive and negative self-affirming information. Those with low self-esteem want mainly positive self-affirming information, whether accurate or not (Bernichon, Cook, & Brown, 2003).

Having an accurate self-concept has obvious adaptive value. Making plans for the future and experiencing success require a fairly accurate self-concept, including realistic assessments of our traits and abilities. Many of the tasks we choose are based on self-assessment of aptitudes. As discussed later, all people are motivated by a desire to save face and impress others, so we are likely to pick objectives closely related to what we think we can do (Trope, 1983).

9.3 Our self-worth: Motivated by the desire to elevate self-esteem
Culture also affects self-esteem. Those living in independent cultures experience primarily ego-based emotions. Accomplishments are a source of personal pride. Those who live in interdependent cultures experience satisfaction or frustration based on their connectedness to others (Mesquita, 2001). Parents and their children, for example, are intimately connected through the children's scholastic achievement. Self-esteem is likewise dependent on the interdependent form of self-construal (Crocker, Luhtanen, Blaine, & Broadnax, 1994; Yik, Bond, & Paulhus, 1998; Diener & Diener, 1995). Social approval is a primary motivator in interdependent cultures, and a better predictor of life satisfaction. In independent cultures life satisfaction is more a function of individual emotions (Suh, Diener, Oishi, & Triandis, 1998).

Our self-esteem is a major dimension of our self-concept. Self-esteem is a global evaluative assessment we make of our own worth. Most psychologists employ simple surveys to assess self-esteem (e.g. Larsen, 1969). Those who have high self-esteem feel relatively good about their self-worth, those with low self-esteem feel some ambivalence, and a relative few feel self-loathing. Trait self-esteem refers to consistent levels of self-esteem over time, probably determined by early experiences with success or failure. Trait self-esteem is defined by self-conceptions of competence and efficacy in various areas of achievement. Trait self-esteem feelings remain consistent over time (Block & Robins, 1993).

We also experience momentary changes in self-esteem as a result of development or from the impact of significant events (Heatherton & Polivy, 1991). Male self-esteem tends to increase during adolescence, whereas female self-esteem falls during the same time (Block & Robins, 1993). At various times in our lives we may experience enhancing events that improve self-esteem. A large raise in salary or promotion at work may improve self-esteem. On the other hand we can also experience failure. If you find yourself competing against contemporaries with higher levels of ability the comparison may have negative consequences for your self-esteem (Brown, 1998; Marsh & Parker, 1984).

How comparisons are experienced depends on the relative centrality of the domain of achievement. Is the area of competition central to your self-worth or peripheral (Crocker & Park, 2003)? Professional achievement is central to many people's sense of self-worth. If achievement is appreciated and work is progressing generally in the right direction, self-esteem will be enhanced; otherwise the blows of misfortune will probably impact self-esteem negatively (Crocker, Sommers, & Luhtanen, 2002).

Central to a person's self-esteem is the human need to be included. There is probably no more serious punishment in society than solitary confinement. Many prisoners can endure other forms of torture and denigration, but isolation is very difficult to accept. Some researchers assert that self-esteem is simply an index measuring relative inclusion-exclusion (Leary, Tambor, Terdal, & Downs, 1995). From an evolutionary perspective it is easy to understand the power of social approval. Those who obtain approval from significant others are more likely to survive and thrive. Approval seeking affects a variety of behaviors (Larsen, 1974a; Larsen, 1974b; Larsen, Martin, Ettinger, & Nelson, 1976; Larsen, 1976a). Those who feel excluded are likely to report low self-esteem. Even our changing feelings correspond to approval from others (Baumeister, Twenge, & Nuss, 2002).

Self-esteem responds also to temporary conditions. Our moods change from time to time, and the reasons why are not always clear. Temporary mood swings affect self-esteem in either positive or negative directions (Brown, 1998). Even setbacks that have very little real meaning can temporarily reduce self-esteem. For example if your favorite athletic team loses an important game, self-esteem may decline (Hirt, Zillman, Erickson, & Kennedy, 1992).

As noted self-esteem is closely related to the domains we consider most relevant to our self-concept. Most people derive self-esteem from selected human activities. For some self-esteem is based on competence in scholarship or career. For others self-esteem is built on athletic prowess. Yet other people think that success in family and human relationships is of greatest significance. It is really a question of what we value in life. What domains are significant to you, and have you experienced success or failure?

Crocker and Wolfe (2001) and Crocker and Park (2003) have proposed a theory of self-esteem based on domains of self-worth. Self-esteem rises or falls with experiences of success or failure in key areas. Societies and cultures will vary as to what domains are considered important. Independence is a significant value in Western societies and is related to achieving economic independence and reaching career goals. In interdependent Asian cultures the respect of others and the maintenance of successful relationships may be more central values. Self-worth is to some degree selected by cultural emphasis and values. Regardless of culture it is important that we do not base self-worth on one or a few domains, since failure will be less salient if we have many domains of interest and achievement. Failure can be devastating for those who seek achievement in a single domain since they have no fallback position for self-worth.

9.4 Cultural boundaries of self-esteem and self-enhancement
The preoccupation with self-esteem is largely a Western phenomenon. It derives from our cultural values focusing on the individual and personal distinctions. It seems ironic that the rugged individualist valued in the West is vulnerable to feelings of low self-esteem. Westerners do self-report higher levels of self-esteem as compared with interdependent peoples (Dhawan, Roseman, Naidu, Thapa, & Rettek, 1995; Markus & Kitayama, 1991). That finding, however, may be attributed to the greater modesty of interdependent peoples, and the greater preoccupation with the self in Western societies. A great deal of energy is spent in Western societies trying to enhance the self, and also supporting the impression management and face work of others to enhance their self-esteem. Americans and Canadians insist they have comparatively more positive qualities than others (Holmberg, Markus, Herzog, & Franks, 1997). The very nature of social interaction in the West, including but not limited to education, media effects, and socializing, encourages a preoccupation with self-esteem.

Being rewarded and praised for achievement is much more common in the West where people, as noted, seek distinctiveness, whereas in interdependent cultures people are motivated by common goals and self-improvement (Heine, 2005; Crocker & Park, 2004; Norenzayan & Heine, 2004). In Asian cultures self-criticism is common in the pursuit of social harmony and self-improvement. A student from the West who is invited to criticize himself may perceive that invitation as a threat to the self-concept and self-esteem. Cultural differences are rooted in either a preoccupation with self-esteem in the West, or with self-improvement in interdependent societies.

Finally, we should keep in mind that cultural differences are abstractions. There are within societies more individual differences than can be found between cultures. Furthermore societies change over time. The individualism of Western societies is a product of recent centuries and the advancement of capitalist economies (Baumeister, 1987; Twenge, 2002). Each generation struggles with the issues related to adaptation, and in a broader sense values that lead to reproductive success. Globalization has produced values held in common by more and more people. In the new world order many countries accept the values of independence promoted in the West. Furthermore, there is evidence that many cultures are becoming more convergent in values and what is required for self-esteem (Heine & Lehman, 2003).

9.5 Preoccupation with self-enhancement
Since self-esteem in Western societies is largely based on independent egos and achievement-based distinctions, most people are motivated to enhance self-esteem (Tesser, 1988). We like to see ourselves in the most favorable light possible given the constraints of reality. According to Tesser we accomplish this vicariously by reflection, where we enhance ourselves by associating with those who have accomplished significant goals. The pride of parents in their children's achievements is of this type, as is associating with those of social status. Much effort in Western societies goes into convincing others of our value by relating to those who possess status.

According to Tesser we also seek to enhance the self by social comparison. Social comparison can be used either upward for achievement or downward to enhance our self-esteem. Even in failure one can compare downward for self-enhancement. One is reminded of some countries where students noted a university degree in their vita followed by the word "failed". The mere fact that a student had entered a university program conferred higher status compared with those who never started!

On a more personal basis we select friends outside our most salient domains so we can always compare downward. Since these friends may perform well in other areas, the comparison can go in both directions. As a general rule we select friends we outperform in our salient domains, but who are talented in other areas. Self-esteem in competitive societies is based on this fundamental idea of ranking higher than someone else. In one study (Tesser, Campbell, & Smith, 1984) the researchers asked grade school children to identify their closest friends, their own most and least important domains or activities, and how good their friends were in these activities. As evidence of self-enhancement, Tesser et al. found that students rated their own performance as better in the salient areas, whereas they rated their friends' performance as better in areas less self-relevant (the reflection process). In other words the students overestimated their own performance in self-relevant areas, and overestimated their friends' performance in other domains, lending support to both social comparison and reflection processes.

Self-enhancement needs are important, and perhaps of overriding importance for most people (Sedikides, 1993). They are especially important when life has struck a blow in an important domain. Being refused entrance to a favorite university may be very painful to the aspiring scholar. Threat or failure leads to self-enhancement efforts that try to shore up self-esteem (Beauregard & Dunning, 1998; Krueger, 1998). Self-enhancement means that we evaluate ourselves more favorably than others (Suls, Lemos, & Stewart, 2002). Our efforts at enhancing self-esteem also affect the memory process. We remember the good and positive features about ourselves, and forget the negative (Sedikides & Green, 2000). We believe we are more altruistic than others (Epley & Dunning, 2000), we think we are happier than others, and less biased (Klar & Giladi, 1999; Pronin, Lin, & Ross, 2002).

There may be times when we acknowledge that we are less than perfect. However, in our efforts to maintain self-esteem we tend to think that the negative in our performance is less important than the positive (Campbell, 1986; Greve & Wentura, 2003). Not surprisingly, we are less likely to falsely enhance when we can get caught in our little self-enhancing lies. If we are poor students we are less likely to boast to our professors about our previous achievements; if we are poor lovers our partners will eventually know. When the truth cannot be hidden permanently we are more likely to be modest in our self-aggrandizement (Armor & Taylor, 1998).

9.6 Self-enhancement and stress
The exaggerated self-conceptions produced by self-enhancement can encourage better mental and physical health (Taylor, Kemeny, Reed, Bower, & Gruenewald, 2000). That illusions can have positive consequences runs counter to many ideas in psychology. From the perspective of existential psychology self-enhancement is a form of defensive neuroticism, and distorts the real world. Since neurotic behavior is associated with continuous anxiety and stress, self-enhancement should be maladaptive. In one study (Taylor, Lerner, Sherman, Sage, & McDowell, 2003) students were asked to rate personal traits like intelligence and physical attractiveness in themselves as compared to their peers. Participants who rated themselves higher than their peers were considered self-enhancing. Later the participants performed tasks designed to create stress as manifested by higher heart rate and blood pressure measures. The results showed that the self-enhancing group had lower heart rate and blood pressure responses, and recovered to normal measurements more quickly. Self-enhancers also had lower cortisol levels than did the comparative group of non-enhancers. In short the self-enhancers had healthier responses, tended to be more optimistic, had feelings of personal control, and had a supportive social group, all of which contributed to the lower cortisol levels. These experimental results support the contention that self-enhancement leads to healthier physiological and endocrine functioning.

9.7 Threat and self-enhancement
When people are confronted with threats to self-worth they typically shore up self-worth by reaffirming other, unrelated attributes of the self (Steele, 1988; Aronson, Blanton, & Cooper, 1995; Koole, Smeets, van Knippenberg, & Dijksterhuis, 1999). Self-affirmation theory applies only to those respondents who have high self-esteem. In one study, students high and low in self-esteem were led to believe they had either failed or succeeded on a test of intellectual ability. Respondents who were high in self-esteem, but who had been led to believe they had failed, exaggerated their positive social qualities. Respondents with low self-esteem generalized their failure experience as one already consistent with what they believed about themselves. Since those with high self-esteem believe they have many other positive traits, they immediately seek to reaffirm their strengths in an unrelated area after perceived threat (Dodgson & Wood, 1998). The healthy nature of self-affirmation can be observed in the fact that the respondents feel good about themselves in the aftermath, and are strong enough to entertain potentially negative information about the self (Sherman, Nelson, & Steele, 2000).

There is no greater threat than that of personal annihilation. Terror management theory asserts that the threat of death leads people to seek ways to minimize or manage this vulnerability (Greenberg, Porteus, Simon, Pyszczynski, & Solomon, 1995). The threat of personal annihilation is kept in control by two mechanisms. First of all, self-esteem helps the individual feel like a valued person in a meaningful universe, and this controls to some degree the threat of death. In the face of imminent death people have a need to reaffirm the importance of their lives and the legacy they have created, including assessments of meaningful work and personal relationships.

Secondly, a worldview that provides hope for the future, or at least makes some sense of the present, assists in controlling the threat of mortality. Conformity to cultural expectations and values is another means by which people control fear (Greenberg, Lieberman, Solomon, Greenberg, Arndt, & Simon, 1992). The familiar is soothing and allows the individual to see continuity even when personal existence is ending. At the same time, when confronted with the fear of death, people also seek affiliation (Wisman & Koole, 2003). We can observe that need in the increasing popularity of the hospice movement. From anecdotal experience (KSL), the death threat is lowered when the patient is under the care of hospice, and the individual feels less lonely or isolated through the efforts of volunteers accompanying the patient on the last journey.

When people are scared by threats to mortality they are also more likely to act with aggression toward those who challenge their worldview (McGregor et al., 1998). Hostile reactions can be observed in the anger displayed by people who are related to soldiers serving in the US Army in Iraq or other theaters. The slogan "support the troops", flag waving, and shrill denunciations of war opponents most likely emerge from the perceived threat to the mortality of the loved one. Nations mobilizing for war have known how to manipulate the threat of mortality in order to energize the war effort and demonize the enemy. That story continues throughout the world today.

9.8 Group membership and false self-esteem
The German people after the First World War were a defeated people, morally, on the battlefield, and in the estimation of the international community. The great depression that followed created economic insecurity and a loss of faith in contemporary society. It was a perfect time for the great manipulators of history to gain power by appeals to false self-esteem and false pride. The Nazis sought to restore false self-esteem by the use of in-group symbols and by finding scapegoats for social frustrations. Although the Nazis' appearance on the stage of history was extreme in destruction and victimization, fundamentally they were no different from any other genocidal group. The genocides in Rwanda and Darfur were caused by similar in-group identification and the demonization of adversaries. The concentration camp that the Palestinian people have lived in for the past half century is motivated by fears similar to those that caused the victimization of the Jewish people by the Nazis. We seem to have learned nothing from history and so repeat the crimes derived from in-group-based false self-esteem.

In contemporary society the phenomenon of gang violence takes a similar path. Gang members typically come from poor and deprived environments ripe and ready for exploitation by misleaders. Typically gang membership is compensation for all that is missing in a young person’s life. As a result self-esteem is derived from gang pride emphasized by the use of symbols and colors. The Bloods (red color) and the Crips (blue color) are common criminal gangs in the US. Typically gang members display an elevated sense of self-worth and grandiosity not supported by achievements or good works (Wink, 1991). The fact that gang members possess false self-esteem can be observed in their sensitivity to any perceived insult or denigration. Children are shot dead in the streets of the US for imagined insults to the colors of another gang, revealing the fundamental insecurity underlying gang enhancement.

In fact psychopaths possess the same grandiose sense of self-worth (Hare, 1993) and are responsible for a majority of violent crimes. Psychopathic criminals also have inflated views of self-worth combined with hypersensitivity to perceived threats or denigration. The murderers and bullies emerging out of gang culture have no genuine self-esteem, but rather are narcissistic and arrogant individuals. Is it a coincidence that members of the white prison gang "Aryan Brotherhood" use Nazi symbols? This false sense of self-esteem is historically responsible for genocidal deeds, whether slavery, modern forms of terrorism, or other forms of violent behavior (Baumeister, Smart, & Boden, 1996). In fact all gangs of history, from those led by Hitler to the military fascists led by Pinochet, have in common grandiose feelings of superiority and arrogance and a deficit in real, genuine self-esteem.

10. A sense of well-being: How do we reach that blessed state?
In traveling to other countries one can often observe the apparent sense of well-being expressed by people poor in material possessions. Yet in our modern world we are taught that consumption is the road to happiness, and that having money to consume produces life satisfaction. However, even in modern capitalist societies money makes little difference to a sense of well-being (Diener, Suh, Lucas, & Smith, 1999). People adjust to whatever economic and social circumstances are present, within some degree of latitude. Of course, if people live with deprivation from poverty in the form of hunger or untreated health issues, well-being is impacted. Well-being is related to the quality of our life experiences (van Boven & Gilovich, 2003). The here and now is important to the enjoyment of life. Many people delay living to some point in the inaccessible future. They perpetually look forward to the joy of the weekend, the vacation, the retirement, and eventually a place in heaven, but fail to enjoy the journey itself.

Realistic expectations play an important role in well-being. If expectations are too high, or if you do not have the resources necessary, frustration may follow. Being able to withdraw from unrealistic goals and move in a different direction is related to satisfaction (Wrosch, Scheier, Miller, Schulz, & Carver, 2003). A sense of well-being is probably a consequence of the person you are. Some people see a glass half empty; others see that the wine bottle next to the glass is still nearly full. We can focus on aspects of life that are going well for us, or we can concentrate on reliving all our failures. Important to well-being is the pursuit of goals that reflect who we are, and which are consistent with basic human values.

Those who live in poverty in third world countries may never have the same degree of freedom that we possess, but that in and of itself does not prevent a meaningful life. Regardless of where we live in the world, we all have basic needs for self-directed lives, for autonomy, for establishing competence in mastering the social environment, and for a supportive social network (Kang, Shaver, Sue, Min, & Jing, 2003). Being optimistic obviously matters, and maintaining positive emotions over time is associated with a greater sense of well-being (Updegraff, Gable, & Taylor, 2004).

10.1 The route to well-being: Complexity of attributes and self-efficacy
Central attributes have a significant effect on the sense of well-being. Some of us put all our achievement eggs into one or a few baskets. For students whose self-esteem is bound up with academic performance and little else, a low grade may be devastating. Others look to achievements in a number of areas to sustain positive feelings about the self. Students can also have hobbies, special talents, a wide-ranging mind, may participate in athletics, and much more. As noted, for respondents with complex self-concepts, setbacks in any one area produce less vulnerability since they have other achievements to sustain positive feelings. On the other hand, respondents with simple self-concepts are vulnerable when experiencing setbacks, as they have nothing else to sustain their self-concept (Linville, 1985). People with simple self-conceptions may feel good when successful, but are likely to be depressed in cases of failure (Showers & Ryff, 1996). Self-complexity provides a buffer against the inevitable setbacks and adversity of life. That is true for those holding complex positive self-concepts. Those with negative self-views are not going to feel better by having more complex negative self-concepts, since that just provides more reasons to stay depressed.

Having feelings of self-efficacy also creates a sense of well-being. The lack of self-efficacy is probably the reason that most dieters fail to stay with the program. Many people have little confidence that they can achieve the weight loss they want, and they then behave in accordance with these expectations of failure. Others have had experiences of success upon which to build self-efficacy. This is the time of year when one of the authors goes on an annual diet called the "keep your mouth shut diet". Based on past success experiences there is confidence that this approach will work again and bring weight down to a more optimal level. There is no doubt that this success story will be repeated.

Self-efficacy probably grows out of early experiences with parents and teachers. Early success leads to stable self-conceptions of efficacy in a variety of areas. Self-efficacy produces a sense of personal control giving encouragement to a person’s planning for the future. Feelings of self-efficacy also help in coping with possible setbacks by self-regulating and changing behavior (Pham, Taylor, & Seeman, 2001).

Self-efficacy reduces the stress of life and produces more optimism about the future. In the long run self-efficacy produces basic approach or avoidance orientations to life. Some develop a behavioral activation system based on positive happenings of the past. Others with negative experiences develop an inhibition system that prevents the individual from undertaking important challenges for lack of confidence (Gable, Reis, & Elliott, 2000). Some think of these basic approaches as stable personality traits. For example, extraversion is a behavioral activation based on social intelligence and success. On the other hand, neuroticism is an extreme example of avoidance (Carver, Sutton, & Scheier, 2000).

10.2 Positive illusions: Another road to well-being
Self-knowledge can affect our well-being. We need realistic self-conceptions to make good decisions and be successful. However, positive illusions about the self can be enhancing, and can encourage and motivate behavior (Taylor & Brown, 1988; 1994). Many psychologists in humanistic and existential psychology (including Carl Rogers and Abraham Maslow) have encouraged us to accept life as it is, and believe that self-illusions are fundamental to neurotic behavior.

Contrary to existential views it appears that unrealistic positive self-concepts are in fact related to well-being. Most people think that positive traits describe them better than negative dimensions. In accepting negative self-descriptions we dilute the effect on the self-concept by asserting that we share these negative attributes with many others. We reason that the flaws we possess are not important since we share them with many people, whereas our positive traits are distinctive.

Those who are well adjusted tend to have an exaggerated sense of control over their lives. People often think that ritual will affect the outcome of life. On game shows one can hear the player "command" the game to perform in the winning direction when in fact the outcome is based on randomness. In a study on lottery tickets (Langer, 1975) the experimenter tried to buy back lottery tickets, all of which had exactly the same probability of yielding a winning result. Buyers who had chosen their own ticket, rather than having one assigned to them, held out for a larger return when asked to sell the ticket prior to the drawing. On the other hand, depressed people are more accurate in their appraisals of control, but are of course less happy (Abramson, Metalsky, & Alloy, 1989).

Self-enhancing perceptions are adaptive (Taylor, Lerner, Sherman, Sage, & McDowell, 2003). Even if our optimism is not justified, we feel better about the future based on positive illusions. Positive illusions give us feelings of control where in fact we have none. Believing in the heaven to come may be a positive illusion that nevertheless helps the believer cope with randomness and absurdity. Should we encourage people to have positive beliefs even if they are illusory? Some research has supported the idea that optimism and a false sense of control may help people feel better about themselves and feel happier (Regan, Snyder, & Kassin, 1995). Do we need a new psychology based on positive illusions, since at least in some areas they are adaptive and not neurotic?

When we feel good about ourselves it has positive consequences for our social relationships. You may have noted that when you feel good about life you are more open and agreeable. Positive self-regard fosters relationships, within some limits (Taylor et al., 2003). However, people will get tired of the self-promoter, and self-aggrandizement can also lead to alienation. As with most other behavior, self-enhancement is an issue of balance. Have you ever met perpetually happy people so self-enhancing that you shake your head and tell yourself "that can't be for real"?

People living in the West are likely to have unrealistic optimism about the future (Aspinwall & Brunhart, 1996; Kitayama, Markus, Matsumoto, & Norasakkunkit, 1997; Seligman, 1991). The optimism is personalized, since they believe positive events will happen to them, but not necessarily to others. Unrealistic optimism emerges out of people's egocentrism, where most people focus on their own outcomes and ignore what happens to others (Kruger & Burrus, 2004).

In any event, having unrealistically positive self-perceptions leads to an exaggerated sense of control and unrealistic optimism. Overall these illusions improve well-being by creating positive moods, healthier social relationships, and by promoting goal-directed behavior. Few of us would start any journey, even an easy one, if we did not believe the outcome would be positive. In struggling against tyranny, as in Burma where the state holds all the power, few people would work for reform or change unless they held the positive illusion that, in the near future or historically, their efforts would be crowned with success.

Egocentrism can go too far (Colvin & Block, 1994). The narcissist typically endorses extreme self-enhancement illusions. However, self-promotion turns off most people in the long run. Narcissists have a tendency to blow their own horn too long, and people reject such behavior (Paulhus, 1998). Longitudinal studies have shown a further downside of positive illusions. Students who exaggerate their academic abilities eventually come up against reality and experience failure at school and loss of self-esteem (Robins & Beer, 2001; Colvin, Block, & Funder, 1995). So not all forms of positive illusions serve the function of well-being. It would appear that we need some positive illusions to become motivated to reach goals, but not so many illusions that we experience constant failure. A balance must be struck between positive illusions and accurate self-concepts.

10.3 Culture and positive illusions
Cultures show significant differences in the endorsement of positive illusions. Westerners are more likely to endorse them when compared to Asian peoples (Heine, Lehman, Markus, & Kitayama, 1999; Kitayama, Markus, Matsumoto, & Norasakkunkit, 1997). In considering academic abilities, Japanese students hold fewer positive illusions compared to Western students, and display less unrealistic optimism when compared to Canadian students (Heine & Lehman, 1995; Heine, Kitayama, Lehman, Takata, Ide, Leung, & Matsumoto, 2002). In a study of 42 nations Sastry and Ross (1998) found that Asians were less likely to feel they had complete control over their lives, whereas people from Western societies displayed unrealistic optimism.

So from a cultural perspective we must conclude that positive self-delusions do not automatically lead to well-being. In independent societies well-being is a construct closely tied to positive views of self, control, and optimism. In Asian societies well-being is tied more to interdependent self-conceptions. The fulfillment of social roles and expectations is fundamental to self-construal in Asia, and satisfaction in these areas is more likely to bring a sense of well-being (Suh, Diener, Oishi, & Triandis, 1998).

11. Impression management: We are actors on the stage of life
Have you noticed that your behavior changes depending on the person with whom you converse and the objectives of the interaction? With your parents you act with a measure of love and social obligation, with teachers you are courteous, trying to produce a favorable impression, with a baby you are natural and feel no need to impress. These varying responses can also be called situational conformity. Before interaction we have an awareness of the person, the situation and the objectives. We mold our behavior to make a correct and useful impression, especially on those who have status and power. The psychopath is perhaps the most skillful in impression management. How did Bundy, the serial killer, create enough trust in young women that they accompanied him to his car where they were overpowered? He did it by putting his arm in a sling; looking helpless, he appealed for help from sympathetic coeds.

In a broader way we want to be accepted by others (Baumeister & Leary, 1995). As noted, there is psychologically nothing more painful than social exclusion. Some societies use that knowledge to torture prisoners, whether at Guantanamo in Cuba or in special penitentiaries in the US, where prisoners sit in cage-like cells for 23 hours a day with no social interaction. We can think of the death penalty as the ultimate form of social exclusion and torture that on its face is both cruel and rather unusual. As noted earlier in this chapter, social exclusion is related to self-esteem. Researchers have also demonstrated that social exclusion is among the most painful and stressful conditions known to humanity (Eisenberger, Lieberman, & Williams, 2003; Twenge, Cantanese, & Baumeister, 2003). We self-monitor so that our behavior is acceptable and we will be included.

We can see by these examples that there is a significant difference between people’s public and private selves. Much that we have discussed in this chapter pertains to the private self, the executive “I” as decision maker or regulator of behavior and how it is influenced by the social context. We operate in a social context of no small importance, and learn early that others have power to make life better or worse. The public self is devoted to impression management, where we try to convey an image and convince others that this image is our true self. We work hard to get other people to see us the way we want to be seen (Goffman, 1959; Knowles & Sibicky, 1990; Spencer, Fein, Zanna, & Olson, 2003).

We are actors on the stage of life concerned with self-presentation and the monitoring of our behavior. Impression management is about convincing others to believe in the "face" we are presenting. We try to control what others think of us because doing so has utility in terms of material, relational, and self-relevant advantages. Goffman was probably the first to systematically examine how we construct our identities in public. He maintained that much of our public behavior is governed by claims we make in an effort to maintain a positive face. The image we want to convey is what Goffman calls face (see also Baumeister, 1982; Brown, 1998; Leary & Kowalski, 1990).

Impression management follows a certain script we have memorized to be used whenever we interact with others. We also expect others to play their roles and to respect the identity we convey. This is a mutual support society since other people depend on us to honor the claims they make. To lose face is very painful, and in Asian cultures can be unbearable. We want other people to respect, not the private self, but the one we present to the world. We are all actors trying to be convincing to our audience.

11.1 Ingratiation
In the process of impression management we can employ several strategies (Jones & Pittman, 1982). The term "brownnosing" is used to describe those who try to ingratiate themselves to gain advantage with powerful others. Ingratiation is a frequently used strategy to make ourselves more likeable to the powerful (Gordon, 1996; Vonk, 2002). Nothing is more effective than sincerely meant praise in promoting liking. On the other hand, if the praise is given for ulterior motives, and most of us can sense that, the ingratiation may backfire (Kauffman & Steiner, 1968).

11.2 Self-handicapping
Another strategy to protect face is self-handicapping. Our face is so important that we often engage in self-defeating behaviors to avoid losing face. In self-handicapping we set up excuses prior to a performance, so that if we do poorly we have an excuse that exonerates the public self (Arkin & Oleson, 1998; Thill & Curry, 2000). Students may self-handicap prior to an important exam. Spending the night drinking with friends provides the alibi for poor test performance, so the result does not reflect on the image created among fellow students. In one study (Berglas & Jones, 1978) students were offered a chance to take either a performance-enhancing drug or one that would impair test taking. The respondents were placed in one of two conditions. One group was led to believe that they were going to succeed on the test; the other group was led to believe that failure was likely. The participants who thought failure was likely preferred the performance-inhibiting drug even though that would result in poor test performance. From the point of view of self-handicapping, students would rather fail but have a good alibi for failure, than take the chance of success but have no excuse if they failed.

Self-handicapping can have serious consequences for health. Condoms have proven an effective preventive of pregnancy and sexually transmitted diseases, yet from 30 to 65 percent of respondents reported that they were embarrassed when buying these health-promoting devices. Somehow buying condoms violates many people’s self-presentations as perhaps non-sexual or at least not promiscuous. In this day of increasing skin cancer many continue to sunbathe to excess to meet a self-presentation of beauty and ironically of health. Social approval continues as a basic motivation for impression management (Leary & Jones, 1993).

Some self-handicapping is not so obvious. We may simply prepare within ourselves ready-made excuses for poor performance. We know the material, in fact we feel that we are experts, but we attribute poor performance on tests to test anxiety, headaches, or being in a bad mood on the day of performance. In the process of self-handicapping we may become self-fulfilling prophecies and come to believe in our excuses. Self-handicappers may become permanently poor performers and fail to establish the parameters for a successful life. It is ironic that the concern underlying self-handicapping, i.e., to be liked for the face being conveyed, may in fact have the opposite result. Most people see through the charade and do not like those who spend their efforts at self-handicapping rather than working (Hirt, McCrea, & Boris, 2003).

11.3 Self-promotion
Impression management is all about making a “good” impression (Schlenker, 1980). Some people use the direct route and self-promote, never tiring of telling others of their many and varied accomplishments. The self-promoter is primarily interested in other people’s perceptions of his or her competence (Jones & Pittman, 1982). Self-promotion depends on the norms of social interaction. In athletic competition a norm of modesty prevails. It is therefore not good form to boast of one’s own performance; success is instead attributed to the efforts of teammates, coaches, and fans. Normative modesty works best when it is false and the athlete actually has cause to boast. Then modesty is a strategy of positive impression management (Cialdini & De Nicholas, 1989).

Other forms of self-promotion are vicarious. We like to enjoy “the reflected glory of others”. By associating with successful others we obtain positive associations (Cialdini & De Nicholas, 1989). Oregon State University had a terrible record in football across many decades. During that time few fans attended the games or wore clothing identifying with the team. That all changed when a new coach created a team with a winning record. Now thousands of cars approach the city on game day, decked out with banners and team symbols. Vicarious self-promotion contributes to positive impressions associated with winning and status, at least in the Western world.

11.4 Private versus public self-consciousness
The aforementioned discussion supports the difference between a public self (known to others) and a private self (known only to the self) (Fenigstein, Scheier, & Buss, 1975). Being publicly self-conscious encourages people to engage in face saving and impression management. The ironic aspect of public self-consciousness is that nearly everyone is conscious of his or her audience and painfully aware that others are observing. However, since everyone is focused on the effect of the audience, there is really little time left over to actually observe others. A lot of face saving and impression management effort is wasted because, while we are aware of others, the focus is on the effect internally. There are individual differences. Those with fragile egos are overly concerned about what others might think about them (again a wasted effort). Insecure people tend to think of themselves in terms of social popularity and approval (Fenigstein, 1984). In public self-consciousness awareness is directed toward what others think; however, since everyone shares that attribute, the focus is internal, on the effects of the audience, and people really do not observe others. Then why be publicly self-conscious?

Some people have private self-consciousness and a greater awareness of internal feelings and thoughts. Those with a private self tend to think of themselves more in terms of their own independent thoughts and feelings. Those with private self-consciousness care little about what others think, but they are a rare breed. Due to the long dependency period of human beings, and the nature of the social self formed by social interactions, private self-consciousness is not only rare, but probably also affected by what others think.

Since we want to be accepted we spend energy and time on self-monitoring (Gangestad & Snyder, 2000). Most people want to be socially acceptable and therefore monitor their behavior to see if it fits the requirements of the situation. People high in self-monitoring are the true actors on the stage of life. They are situational conformists, switching behavior as required from one situation to the next. Low self-monitors are more likely to respond to internal impulses or demands, and are less dependent on the social context. Is self-monitoring adaptive? In one study (Snyder, 1974) patients in a mental hospital scored low on self-monitoring. That finding suggests that to cope effectively with life requires at least some awareness of surroundings and the social demands for appropriate behavior.

11.5 Cultural differences in impression management
In all cultures the social self emerges from social interactions and is formed by the socialization of varying social values. The fundamental difference in cultural values, as noted previously, is the predominant emphasis on independence in Western cultures and interdependence in Asian and some other developing societies. The term “saving face” has been associated with Asian cultures and reflects a special sensitivity to maintaining face in these societies. To lose face is to lose identity for interdependent people. Appearance is of great importance. For example, if it is important to have many wedding guests, and one has an insufficient number of friends attending, one can rent guests (Jordan & Sullivan, 1995). If there are too few mourners at a funeral one can hire professional lamenters to produce an appropriate display of grief.

In Asian cultures, impression management concerns measuring up to social roles and expectations, whereas in the West there is a greater desire for individual enhancement (Heine & Renshaw, 2002; Sedikides, Gaertner, & Toguchi, 2003). In fact self-enhancement is ubiquitous in Western societies while relatively uncommon in interdependent cultures. The various concepts discussed in this chapter, like self-consciousness and self-regulation, take different forms depending on culture (Simon, Pantaleo, & Mummendey, 1995). Yet these cultural differences must be taken with a grain of salt. Culture may account for only small amounts of the behavioral variance, and societies are changing as the world becomes more convergent. At the same time, if we want to improve intercultural communication we must have some awareness of cultural values.

Summary
This chapter discusses several dimensions of the social self, self-knowledge and self-esteem. Self-awareness starts at an early age, perhaps as early as nine months, and certainly by age two the child recognizes the self as distinct. Over time we accumulate knowledge about the self from experiences with family, school, and culture. As our interactions become more complex, a belief system about the self emerges, and along with that an understanding of our more complex attributes. Self-esteem is our judgment of personal morality, and the satisfaction with our performance relative to ideal and ought selves. People who are low in self-esteem need constant approval and reaffirmation. High self-esteem is functional in setting goals and persisting in our goal directed behaviors. Those with low self-esteem are more pessimistic and do not believe they have self-efficacy.

The building blocks of the self point to five basic traits as being universal: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. The research literature supports the heritability of personality traits. We use these traits in judging others and ourselves. Since the traits are understood everywhere they must have a biological, evolutionary basis growing out of needs to adapt and survive. The heritability of traits is supported by studies of fraternal and identical twins. Also, traits identified early in children, like shyness, tend to have lifelong consequences. Neuroticism is associated with subjective stress, and on the opposite side extraversion is associated with the presence of the neurotransmitter dopamine. It is impossible to separate the self from biological inheritance. Recent research points to the complex interaction between genetic inheritance and specific environments in producing predictable behavior. Perhaps some traits like neuroticism were adaptive in early human history in the struggle for survival, but are maladaptive now in our complex society.

Scientists and philosophers have long discussed the nature of the self. As science has progressed we understand more and more of the so-called “easy” problem that links thought to brain function. The “hard” problem is trying to understand the “knower”, the subjective experience that someone is in charge, an executive “I” or decider. Why does it feel like we have a conscious process, and how does that subjective experience emerge from neural computations in the brain? When scientists use MRIs they can practically map thought processes in the brain, but there is no convincing evidence of an ethereal soul. Is the “knower” nothing but an illusion required by the information overload in the brain and the need to evaluate stimuli? Can the knower be understood solely as brain activity? Certainly believing in a soul construct has not supported moral behavior, as all of human history attests. The hard problem remains and may never be solved. All we can say with certainty is that the whole is greater than the sum of its parts.

The development of the social self is produced by the consistent reactions of socialization agents. These reactions influence the development of self-knowledge and self-esteem. It is the consistent treatment by early socialization agents such as the family that is the basis of what we believe about ourselves, and that knowledge guides our behavior for the rest of our lives. The family is central in the creation of the possible self, the self of the future. Other factors that influence the development of self-knowledge and self-esteem are birth order and group memberships. Birth order has an effect as children learn to occupy various niches in the family that are functional and rewarding. Group memberships are also a key to understanding the self because groups socialize values that have motivational significance. Research has shown that even groups formed on meaningless criteria may have profound effects on decisions, and history shows that group categorization itself is responsible for much of the mayhem in the world. Minorities, for example, have to deal with special challenges as they cope with mainstream cultures. In general, though, strong ethnic identity combined with positive attitudes toward the larger society is associated with high self-esteem.

Culture is a major source of the self-concept. The main differences discussed in this chapter and in what follows are the reliable differences found between interdependent and independent societies introduced in chapter 1. For the interdependent societies of Asia and elsewhere, the social context of family and society matters greatly in the development of the self-concept. The independent societies of North America and Europe have a more independent self-construal where the self is seen as autonomous, distinct, and separate from others. Whether we achieve for personal reasons or for group goals is to some extent determined by culture. One’s culture might also affect the choice of career, and whether we seek to enhance the self or society. In independent societies self-esteem is ego based, whereas in interdependent cultures it is more related to family and social approval. As always we must remember that cultural differences are abstractions, that people differ within cultural models, and that the world is becoming more convergent.

Gender plays, along with family, groups and culture, a vital role in the development of the self-concept. All cultures treat males and females differentially, with lifelong consequences. Women become more interdependent and connected to intimate relationships. Men are more affected by larger social groupings. Socialization through the efforts of families, society, and educational processes produces these predictable differences. Gender differences probably evolved early in human history in response to survival demands that required role specialization. A few theories have been discussed in this chapter.

Social comparison theory asserts that we learn about ourselves by comparing our behavior to that of others. We enhance ourselves when we compare downward, and inspire ourselves for achievement when comparing ourselves to high achieving models. At times, e.g. when facing a crisis or in response to uncertainty, we compare in order to bond with other people.

Self-perception theory suggests that we derive the meaning of emotions from self-observation of our own behavior. At times we meet with novel or unfamiliar situations and do not know what we are supposed to feel. In these cases our objective behavior becomes the guide for understanding our emotions. We attribute meaning by ascribing the cause of our feelings to either the situation or to personal volition. Self-perception theory has been applied to education, and supports the importance of intrinsic motivation in producing lasting learning. Schachter used self-perception theory in his two-factor model of emotion. He states that people note their internal physiological reactions to stimuli and then look in the environment for a plausible cause to explain these feelings. This has been demonstrated in research showing that emotional labels may be arbitrary and can be manipulated. For example, happiness or anger can be attributed from the same physiological reactions depending on environmental factors. Misattribution of arousal is possible as more than one source can explain what we feel. Research shows that misattribution of arousal can also easily be manipulated. In relation to this, cognitive appraisal theories point out that sometimes we experience emotions after we think about and understand the situation. The meaning of the situation, the good or bad it implies for our well-being, brings on emotions after we have thought about these consequences.

We can also learn about the self-concept by introspection, although introspection is not reliable. Most people spend little time thinking about themselves because it is, at times, painful, especially if we are aware of shortcomings in meeting ideal or ought selves. We seek escape in drugs, excessive television viewing, or dogmatic religion that tells us all we need to know. Also, introspection may not tell us the real reasons for our feelings, as we may rely on causal theories derived from society that offer plausible but false causes.

A major organizational function of the self is the constricting and narrowing of our perceptions. Research shows that the self affects memory, as recall of material is more efficient if related to self-relevant schemas. Self-schemas refer to the basic dimensions we employ in cognizing about the self; they are our organized thinking about important self-relevant dimensions. Self-schemas are readily available in memory, and are a fundamental organizing tool. We develop self-schemas because we cannot attend to everything, and therefore focus selectively on information considered most relevant. At the same time self-schemas restrict information by removing from awareness information that is inconsistent with what we already believe. Self-schemas are stable over time, precisely because we act consistently and respond selectively to new information.

A major function of self-schemas is self-regulation. We think about the future and envision a possible self, what we can become, and this motivates our planning and behavior. The self serves regulatory functions in determining plans and choices for creating the future that we expect and want. It is important to keep in mind that energy for self-regulation is finite. This fact makes us vulnerable when trying to stay on diets or refrain from taking up bad habits once discarded. The stable self provides a sense of continuity throughout the lifespan. At times we are faced with novel situations like soldiers in wartime, and develop working temporary selves to cope with demands. Sadly, these temporary working self-concepts can become part of the permanent self when the behavior varies widely from the stable self, and the situation is traumatic and powerful in its effects.

The self has motivational properties. Our current behavior is determined by our plans for the future and our possible selves. Possible selves also include religious and cultural standards, and are often associated with feelings of guilt and shame. The ideal self refers to our aspirations in life, whereas our ought self describes our obligations and duties. Discrepancies between ideal or ought selves and what is real cause anxiety, and produce for some the motivation necessary to change. Most alcoholics feel the discrepancy eventually, and many seek help.

In judging others we use our self-image bias. Whether we accept others is related to how similar others are to ourselves. Culture plays a role here as well. For example, in the West others are judged according to criteria of the independent self, where the ideal self plays a primary role. In interdependent cultures others become standards for judgment, and the ought self, including obligations and duties, is the primary evaluative tool.

We are motivated by consistent and accurate self-conceptions. Feedback that is consistent with our self-conceptions is especially motivating. We primarily seek self-affirmation in our interactions with others, and this in fact influences our choice of friends. We select those friends who will confirm our self-concepts. This selection is to some degree modified by self-esteem: persons with high self-esteem are more likely to be receptive to both negative and positive self-confirming information than persons with low self-esteem. An accurate self-concept is adaptive since plans and success in the future depend on accurate self-assessments.

Most people are motivated to enhance a sense of self-worth. There are components of self-esteem that remain consistent as a personality trait throughout life. Momentary changes in self-esteem, however, may occur from developmental issues and as a consequence of significant events. A central issue in the need for self-esteem is the desire to be accepted and included. Isolation is therefore extremely painful, as penologists know. This preoccupation with approval derives from obvious social and evolutionary advantages. Our self-esteem may rise or fall with experience in domains key to the self. In turn culture determines to some extent what areas are considered salient domains. Research shows that self-esteem is more functional if based on more than one or a few domains. With many domains we can control the inevitable setbacks that life hands us.

Preoccupation with self-esteem is primarily a Western phenomenon. It is derived from the cultural focus on independence and personal distinction. That Western respondents self-report higher levels of self-esteem may be attributed to the greater modesty of interdependent peoples. Being rewarded or praised for achievement is more common in the West, whereas in interdependent cultures people are more motivated by common goals and self-improvement. Cultural differences in self-esteem are abstractions, as again there are differences within cultures, and globalization is encouraging convergence in values.

False self-esteem is aggrandizement based on group memberships where the group operates by the scapegoating and demonization of outsiders. Gang violence is caused by false aggrandizement as compensation for all that is missing in the gang member’s life. Gang members display elevated self-esteem not justified by accomplishments or good works. Their fundamental insecurity is revealed by their sensitivity to perceived insults. Psychopaths possess grandiose conceptions of self-worth, but no genuine self-esteem.

The preoccupation with enhancement influences the way in which we associate with others. It leads us to compare the self with others, looking downward for advantage or enjoying the reflected glory of the achievements of those with whom we associate. Friendships are based on the need for enhancement. When we select our friends we ensure that we can compare downward in the most salient domains. In Western cultures self-enhancement is of overriding importance, especially when we are threatened by failure. In general most people believe that their positive traits are more important than their negative attributes. Self-enhancement leads, in fact, to better mental health, and better physiological and endocrine functions.

When the self-concept is threatened we shore up self-worth by reaffirming other, unrelated attributes of the self. For example, there is no greater threat than mortality. We control this essential threat through self-esteem: we assert that our lives are worthwhile and we rely on a worldview that makes life meaningful. When people are threatened by mortality they are easily manipulated and provoked to aggression. Threats to worldviews or to conventional society undermine the cultural meanings that control death anxiety.

In a complex world how do we find a path to well-being? In Western societies people have been convinced that consumption is the road to follow. However, well-being is related to the quality of life, to the journey of life, and to realistic expectations. Furthermore, our personality also matters. For instance, for some people a glass is half empty, for others the glass is half full and next to a plentiful bottle. It is important to pursue self-relevant goals that reflect that which we value in life. Regardless of cultural differences we all have basic human needs for autonomy, for competence to deal with challenges, and for a supportive social network.

Research shows that complexity of self-attributes and self-efficacy are necessary for well-being. Respondents who possess more complex self-concepts are not overcome when facing a setback in a single dimension. Self-efficacy is the feeling of “can do”, that we have the necessary competence to succeed. Self-efficacy grows out of early experiences with parents and educators. Early success reduces experienced stress in life. Positive illusions refer to exaggerated optimism and sense of control in life. The well-adjusted often display positive illusions that can enhance, encourage, and motivate behavior. Those with positive illusions are happier and have better social relationships than the depressed, who have more realistic conceptions. People in the West are especially likely to display unrealistic optimism about the future, and positive illusions are more likely to be endorsed in Western societies. The downside of positive illusions is that at times we must face unpleasant reality. Well-being in interdependent cultures is more related to the fulfillment of roles and social expectations.

Impression management suggests that people are actors on the stage of life. Most people mold their behavior according to situational demands; we are chameleons according to need. Psychopaths are especially skilled at impression management. Since we all want to be accepted we work hard to convince others that our self-presentation is true. We encourage others to believe in our public face. Ingratiation is a form of impression management where we try to make ourselves more likeable to the powerful through flattery. Self-handicapping promotes face saving by engaging in self-defeating behaviors prior to performance. Sometimes people take foolish chances with health in order to preserve their face and image. Self-promotion is a more direct path of impression management. We seek to convince others of our competence, and of our associations with others of status and power. It is primarily the publicly self-conscious who engage in impression management. People with private self-consciousness are concerned with independent thoughts and feelings. The social self emerges from social interaction in all cultures. The self-concept is therefore a consequence of cultural values. Saving face is of particular importance in Asian cultures. Central to these societies is the concern about roles and expectations, whereas people in the West are more concerned about individual enhancement.




Being Human. Chapter 4: Social Cognition: How We Think About The Social World

Every day we are confronted with situations requiring judgment and decisions. At times, in emergencies, rapid decisions are required allowing little time for reflection. In other situations, the outcome matters greatly and motivates us to carefully evaluate the judgment and consequences of our decision. Social cognition is a fundamental area of social psychology, and refers to how people utilize information in making decisions. Specifically, we will attend to how we select the information, how we interpret the information, and how we organize it to respond to the decision making demand.

In situations involving police or other emergency teams there is little time to evaluate. The police may have fractions of a second to decide if a suspect is holding a gun or some harmless object and to subsequently decide either to fire to kill, or to pursue another line of action. How does a police officer make such decisions? There are those who would argue that in the case of suspects the police use race to determine whether a suspect is dangerous or not (Singer, 2002). For example, in Cincinnati, USA, the police killed 16 black suspects in six years, while no whites were killed in similar circumstances. It seems reasonable to assume that prejudice played a role in these life or death situations in the United States. In other words, faulty decision-making is often a result of rapid response requirements combined with false social stereotypes. We have more to say about stereotypes or cognitive schemas later in this chapter.

On the more positive side, automatic thinking can also save lives. One of the authors recently had an accident, which caused five broken ribs, a punctured lung, and the loss of his spleen. He can recall every detail of what happened during the accident, and the efforts made to save his life. The emergency crew went on automatic thinking as soon as they saw his injuries, belting his body in several places, providing oxygen, and, after questions about any allergies, starting pain medication. In the emergency room there were similar very crisp questions as the surgeon ruled out other problems and directed attention to the needed surgery. This surgeon had a well-established memory of similar injuries and proceeded rapidly to address the injuries and stabilize the patient’s vital signs. As time was of the essence, these professionals were on automatic pilot as they took the steps needed to administer medical services. Automatic decision-making is rapid and carried to conclusion without a great deal of extended thought and reflection. In this type of social cognition people act, as it were, without thinking, responding to internalized memory and experiences (Bargh & Ferguson, 2000; Sloman, 1996).

There are other occasions when the situation demands a longer and more deliberate evaluation process. How to choose a life partner, what occupation to adopt, what philosophy or ideology to believe in, are best decided through thorough and careful evaluation. By thinking through all the issues and evaluating the potential consequences of our decisions, we can make better decisions, resulting in more contentment over the long run. Although automatic thinking seems to dominate so much of social behavior, we do have the capacity to override the process, and analyze the situation slowly and deliberately.

However, neither type of thinking is error free, as important information is often missing. Even powerful nations like the US make basic errors despite heavy investments in intelligence. We can observe that it is not information alone that determines inferences, but also ideology. Ideology determines what information the individual or group will incorporate and accept. What comes to mind is the obvious fiasco of going to war in Iraq based on the assumption that Iraq possessed weapons of mass destruction. The intelligence services provided accurate information that there was no weapons of mass destruction program in Iraq. However, since the decision to go to war had already been made, this inconvenient information was not incorporated in the decision-making. At other times, of course, the information we have is not only inconvenient, but also incomplete, ambiguous, or contradictory. How we make decisions given the incompleteness of information is the basic question addressed in social cognition.

1. The process of making inferences from our own experiences
If our inference processes were in fact unbiased, we could all arrive at judgments that reflect reality. Unfortunately, drawing inferences is not such an even-handed process, but rather one that is often dominated by errors and biases where we depart from logic and accuracy. To arrive at any inference is a process containing several interrelated cognitions. First, to make any judgment we must gather information. If you are trying to decide whether to work for a certain company you may want to know something about the company’s outlook on its workers, on pay and benefits, on vacation allowances, and, in the long term, on retirement plans. Some of this information will be more important than other knowledge about the company. For example, if you really need a job now, and you are young, retirement may seem a topic of little interest or concern. Part of drawing an inference therefore is to decide what information is useful, and then to try to integrate that information into some judgment or decision.

1.1 Some sources of bias
Actual information gathering is, however, subject to several sources of bias that may affect your judgment. All of us have incorporated expectations into our knowledge base. You have learned from friends or others you trust that this company is very good to its workers. Yet during your job interview you get the impression that the company has little concern for the well-being of its employees, but you refrain from checking the truth of your impression. Prior expectations may cause us to draw wrong inferences (Nisbett & Ross, 1980). We tend to gather and attend to information that is consistent with our expectations. We are less likely to gather information that is inconsistent with what we expect, and because of that bias we are more likely to draw inaccurate inferences. Since a person is less likely to gather inconsistent information, prior expectations bias the information gathering itself. Prior expectations may cause the individual to completely ignore any contradictory information, or at least to be skeptical of the accuracy of inconsistent information. People favor information that supports what they expect and what they want to believe (Ditto, Scepansky, Munro, Apanovitch, & Lockhart, 1998).

Often our inferences are based on samples that are small or not representative. It is of course not possible to talk to everyone in the company where you seek employment, but if you talk to only a couple of people it is not likely that useful information will be obtained. In many cases that does not prevent people from making inferences anyway. We utilize what we know, even if that knowledge may be misleading (Nisbett & Kunda, 1985). Today we live in a world in which statistics can describe just about any aspect of human life. The young person looking for employment can probably look up the company on the Internet and learn much that is useful: for example, how profitable the company is, how stable the management is, and whether jobs are secure. Here again we can observe a bias that seems characteristic of humans. Although statistics tend to be objectively based on averages or totals (and are therefore more accurate), this information is frequently discarded in favor of anecdotal stories that emphasize information about specific persons or happenings. For example, the statistics about the company may show that it pays very low average salaries, but you have learned that an individual hired by the company managed to get himself promoted to a high position in just three years. Which source will be more powerful in your inferences about the company? Research suggests that the anecdotal information has more influence on judgments (Beckett & Park, 1995).

Another source of bias is the differential weight given to negative information. More significance is placed on negative as compared to positive information, and it weighs more heavily when decisions are made (Taylor, 1991; Pratto & John, 1991). Illusory correlations may also produce a bias in inferences. If our prior expectations suggest that two variables should go together they are often seen as correlating, whether that is factual or not. We have stereotypes about minority groups and violence, for example. While there may be a little truth to some social stereotypes, they never help us understand individual behavior. A minority individual may or may not fit the stereotype, hence illusory correlations produce inaccurate inferences.

How decisions are framed may also influence judgments. Here the research points to a basic factor in social cognition: are the decisions framed in terms of potential losses or gains? People tend to be cautious and avoid risk when alternatives are framed in terms of gains, but are far more likely to take risks when the same alternatives are framed in terms of potential losses (Kahneman & Tversky, 1982). Framing also shapes how attractive an option appears. If you are in charge of hiring our imaginary prospective employee you would emphasize the stability of the company and a career that can only produce gains, not the fact that a third of the employees leave the company each year (Rothman & Salovey, 1997). In other words, emphasizing the positive will make it more likely that the employee will take a chance on the company and accept employment.

1.2 Mood and emotion
Many of the errors we make derive from our commitment to evaluative beliefs. If we have a commitment to a particular idea, ideology, or religion, then that emotional commitment may override factual information that is contrary to these evaluative beliefs. Emotion often overrides rational decision-making, particularly if the evaluative beliefs are of great significance and serve as a source of psychological balance. Of course emotions also have a very important role to play in accurate decision-making. Emotions may produce warning signals when a risky decision contains potential disaster. More and more researchers are coming to the conclusion that emotion and cognition go hand in hand, and provide complementary information (Gray, 2004).

Moods are more temporary, but can still have great influence on decisions. When we are in a good mood we tend to get along better with others, and our inferences are affected. Even though moods may not last long, we can still make decisions in these temporary conditions that have long-lasting effects (Forgas & Ciarrochi, 2002). When people are depressed they tend to be accurate in making pessimistic predictions about the future, but less accurate in anticipating positive events (Shrauger, Mariano, & Walter, 1998). A mood of sadness may impair accuracy since it slows thinking and promotes more deliberate information processing when the situation requires a more immediate response (Ambady & Gray, 2002).

2. Biases in information presented firsthand and secondhand
We receive information from different sources, which provide the bases for social judgment. Some of our information comes directly from our own interaction in society and our own experiences. Our culture, educational system, and prevalent ideologies provide filters for direct experience. The discussion so far has already shown that there is unfortunately no one-to-one relationship between our experiences and accuracy in social cognition. What distortions in memory derive from our own firsthand experiences, and what distortions derive from others in society?

2.1 Believing everyone else is better informed
Most students will have attended a class in which the professor asked, after a particularly difficult lecture, if anyone had any questions. Probably some students had questions, but since no one raised a hand they falsely assumed that they were deficient in knowledge and that all the other students had understood the material. Afraid to show their ignorance, the individual student, along with everyone else, therefore did not ask any questions. This scenario is called “pluralistic ignorance” (Miller & McFarland, 1991).

It seems clear that underlying this distortion of information is the fear of rejection by the teacher or classmates, or of not fitting into prevalent classroom social norms. Other researchers (Klofas & Toch, 1982) found similar results for prison guards, who typically operate in a macho, tough culture and therefore falsely assume that the other guards have no sympathy for the prisoners. Another study demonstrated pluralistic ignorance in drinking behavior (Prentice & Miller, 1993). One university had a culture of alcohol abuse, and the students generally assumed that this met with universal approval, when in fact their private opinions often clashed with this norm.

2.2 Biases in memory
Memory is not just a register of past events. In fact memory is an active process of cognition, which often changes what is remembered in significant ways. Again our wishes and desires predominate, so what is remembered is what we want to remember more than what actually happened. For one, we never remember everything about an event, so memory is an underestimate of what happened. More significantly, however, we sometimes remember things that never happened (Conway & Ross, 1984). These phenomena seriously distort judgment based on memory. In recent years there has been a great upheaval in psychology over the phenomenon known as “false memories”. Typically these are memories of traumatic events that happened early in life, were then forgotten, and were later retrieved in therapy. In one very famous case a young woman, Eileen Franklin, accused her father of sexually abusing and murdering her best friend. Her father was sentenced to prison and served six years before it was established beyond any doubt that Eileen’s “recovered” memory was false. Still it remained her firm belief that her father was guilty. Many other cases of falsely accusing someone of sexual abuse are now part of the legal case history in the United States, and show convincingly the fallibility of human memory (Loftus, 1993).

Some memories are of events that occurred under dramatic circumstances. For example many people remember where they were exactly when significant events occurred in national or world history. Often even these apparently vivid memories show significant discrepancies from earlier memories of the actual event (Neisser & Harsch, 1992).

We all have ideas of how things should be, consistent with our beliefs and ethics. Research has shown that ideas about how things should be often change memories of how things were (Ross, 1989). In the US we have seen dramatic shifts in racial attitudes over the past decades. For example, the educational system used busing of students from minority neighborhoods to more integrated schools as a means of overcoming the negative effects of racism. In the early years, there was a great deal of resistance to busing among white students. However, over time their opinions changed, and when they were asked to recall their earlier attitudes the results showed considerable distortions in their memory in favor of the new, modified opinions (Goethals & Reckman, 1973).

2.3 Information we obtain from others
On most of the large-scale issues of life we have little firsthand information, but rather must rely on others for our opinions. This information too is filtered through our belief systems, and through those who are the sources of the information. How accurate is this information? Obviously we can never get a complete picture, since describing an event in detail takes too much time. Therefore shortcuts are employed in order to convey that which, in the eyes of the communicator, is most important. This process of conveying the more important or relevant elements is called sharpening. At the same time irrelevant or less interesting information is left out, a process referred to as leveling.

Most of us have never met the president, the queen, or the king of our country, or other famous or notorious people. Yet that does not prevent us from having opinions about these public personalities. We develop our opinions from the views of those we respect, members of our family, television, and other news media. Again, we engage in a process of sharpening and leveling of information in the interest of a consistent image of the other person. Research shows, however, that such secondhand opinions tend toward the extreme. We are stronger in our dislike, and more flattering in our positive evaluations, than our information supports. For example, the opinion polls on President Bush show that currently he is the most unpopular president in the history of the US. Not so long ago (in historical terms) he was very popular. However, ratings not based on personal experience, like opinion polls, tend toward more extreme views. This tendency toward extreme views based on secondhand information has been found in a number of studies (Gilovich, 1987; Inman, Reichl, & Baron, 1993).

2.4 Slanted views provided by the media
One of the major reasons for distortions is the role played by the media. To a large extent television in the Western world is primarily mindless entertainment. Therefore the more exaggerated the story, the more likely it is to be included in the evening news. The news focuses especially on the negative and on catastrophic events. These happenings should of course be included in the overall picture of the world, but other news, such as heroic efforts to help others or stories depicting goodwill, is often excluded in favor of these distortions. In short, the need to entertain a population that is thought to have a very short attention span supports the emphasis on dramatic and scary events, which reflect only a small portion of behavior or events in a country.

This has an effect on how people view the world. When you are bombarded every day with bad news, wars, murders, rapes, is it any wonder that many people become scared and believe that the world is a very dangerous place? The bias toward bad news in fact creates a picture of the world that is not realistic. For example, research shows that on television 80 percent of all crime is violent, whereas in the real world only 20 percent can be categorized as such (Windhauser, Seiter, & Winfree, 1991). Going to the movies presents an even more distorted view of the world, as the emphasis is again on the violent, dramatic, and negative (Gerbner, Gross, Morgan, & Signorielli, 1980).

One consequence is that many people believe the world is more dangerous than it really is. A distorted picture of crime produces in people a heightened fear of victimization and insecurity. Although the murder rate dropped a little in the United States in the period from 1990-1998, television shows focusing on homicide increased during the same period by 473 percent (Center for Media and Public Affairs, 2000). Some studies show a relationship between the number of hours a person watches distorted television, and the fear of victimization (Doob & McDonald, 1979), especially by those who live in neighborhoods where crime is present.

2.5 Distortions based on ideology
There are those in society who have a vested interest in providing a slanted story. The objective is not so much to tell the truth as to persuade a target population of the justice of a cause. Social ideologies often lead the media and educational system to accentuate certain features of a story while excluding other important aspects. By suppressing inconvenient information an attempt is made to support certain beliefs about reality in the world. All societies in the world have such ideologies operating. Although many would proclaim the presence of press freedom in the Western world, there is much information that never sees the light of day. For example, few people in the US have any information about Cuba, except the very predictable condemnations one hears from time to time from the government. There is no information on Cuba’s achievements, such as eradicating illiteracy, providing medical care, and other systems of social security. These ideological distortions are not carried out innocently, but are the consequences of deliberate policy, and the news media conform to these expectations.

A fundamental question is why people consume so much negative information. Why is there a preference (which we can observe in the popularity of television programming) for catastrophic and negative news and shows? Does it make the individual feel better when he sees violence but can say, “thank god it is not me”? Of course negative information may have some survival value. If we are presented with real dangers we are more likely to survive if we attend to these aspects of our environment. Perhaps such survival needs make people more vigilant to potential threats (Rozin & Royzman, 2001).

Is information equally useful regardless of how or when we obtain the intelligence? Research by social psychologists shows that it matters greatly in what order the information is received. Also, even slight variation in the actual wording can have a great impact on people’s responses. The cold war produced mindless conformity in Western countries, during which one’s own side was considered the repository of all that was good and praiseworthy, and the other side was simply evil. Should it surprise us, therefore, that US respondents had very different views on whether reporters from socialist countries should be admitted to the US to report on the news, and whether US reporters should be admitted to socialist countries to do the same? In fact only 36 percent of US respondents thought that reporters from socialist countries should be admitted to the US, whereas 66 percent thought the socialist countries should admit Western reporters. Later, very different results were obtained by merely changing the order of the questions. If the respondents were first asked whether US reporters should be given free access in socialist countries, 90 percent said yes. Since that question was asked first it put some pressure on the respondents to be consistent, and 73 percent then agreed that reporters from socialist countries should have similar privileges. Still a lower number, but much higher than the 36 percent who responded favorably when asked first about press freedom for socialist reporters in the US (Hyman & Sheatsley, 1950). This and other studies (Haberstroh, Oyserman, Schwarz, Kuhnen, & Li, 2002) show that the order in which information or questions are presented can have a powerful effect on the respondent’s judgment.

Some research has shown a primacy effect, i.e., the information that is presented first is most influential. Other studies have demonstrated a recency effect, i.e., the information presented last is most powerful. The studies do not permit any overall conclusion other than that the order in which information and questions are presented matters. For an overview of which (primacy or recency) is most effective see Fiske & Taylor (1991).

Consequently, it is important to keep this in mind if one is developing a survey. Even if all precautions are taken by, for example, guaranteeing anonymity, the results can still vary widely. Those who have a vested interest in manipulating public opinion know that if the contents of the question are varied slightly, there will be a different result. Opponents in a political debate know how to spin the questions in order to obtain a desired result. One man’s terrorist is another man’s freedom fighter.

Some descriptions are key to an overall stereotype. In another classical investigation Asch (1946) showed that just including the words warm or cold in a person description containing many other trait words as well would completely alter the perception of the person described. Obviously we must be very careful in framing questions, knowing that the order asked, and even slight variations in the content can influence the outcome in significant ways.

2.6 Does motivation affect inferences?
We have seen that people often produce information that is largely self-serving, and develop inferences whose relationship to the truth is coincidental. We want to believe in what we think will produce personal happiness, and we will take whatever steps are necessary to keep incongruent information out. For example, even though divorce rates are approaching 50 percent, most of those who marry do not believe these statistics are applicable to their own relationship. In general we persist in believing that only good things will happen, and that bad situations can be avoided (Kunda, 1987).

We might think that if we were highly motivated we would make more careful decisions (Pelham & Neter, 1995). In general the results show that motivation is only of benefit if the decision is easy. If the judgment required is difficult, accuracy in decision-making decreases.

Studies have shown the ability to suppress feelings in various circumstances. You want to forget about a painful relationship, or some traumatic circumstance. As soon as the mind becomes aware of the unpleasant thoughts it can reduce their impact on consciousness by thinking of something else more pleasant (Foster & Liberman, 2001). Some studies also show that suppressing thoughts has a cost attached. Thought suppression requires considerable effort that involves not only cognition, but physiology as well. Some studies have shown a negative effect on the immune system through chronic thought suppression (Harris, 2001).

In general social inference is at best an imperfect process where we often make errors in favor of what we desire and want, rather than incorporating some standard of objective reality. Still, without the stereotypes and schemas that moderate social cognition, the complexity of information processing would overcome the average person. It is necessary that we remain aware of the cognitive pitfalls.

3. Automatic thinking and our use of schemas
As we have already noted, not all social cognition involves careful evaluation. Often we react rather automatically to social stimuli, as if we have ready-made responses stored in our memory. Automatic thinking is largely unconscious, and occurs without intentional effort (Bargh & Ferguson, 2000). The ready-made responses are called schemas: mental structures we possess that function to organize our knowledge about social stimuli. These mental structures influence what information we attend to, what we think about, and what we store in long-term memory (Taylor & Crocker, 1981). Schema is a generic term for knowledge structures (e.g. assumptions or preconceptions) that define other people, what we are ourselves, and our social roles in society. What is a student like, what are the characteristics of a teacher or professor? Do students desire knowledge, and are professors those who like to help?

In each case a schema includes all our knowledge about the social category, as well as situations that are common. What is your schema for attending a football match in The Netherlands? Does it include noisy behavior by fans, and perhaps acting out by young people when the national team wins an important game? How do fans behave when The Netherlands wins an important match over archrivals? Are certain expectations in your mind part of your schema about football and fan behavior? What is your schema about the opposite sex? Does it include gender specific behavior, for example expecting more emotionality by females? Are males expected in your schema to be more assertive? In these and all cases we have stored schemas based on our past experience and what we have learned from others.

If we did not have schemas our lives would require a fresh evaluation of each new situation. Can you imagine the confusion of going shopping to buy products without schemas? Perhaps there are a variety of toothpastes. How can you choose one? If you have a schema your thinking is automatically oriented by previous trials or perhaps by advertisement. Without these mental structures not only would shopping be a long and painful experience, but also a very confusing one, as a person would have to examine all alternatives. Schemas therefore direct our attention in specific ways, and structure our memory for future use (Brewer & Nakamura, 1984).

3.1 The function of schemas
Schemas are used to complete information that may be lacking in a specific situation. How do you expect people to behave who are members of specific national or racial groups? If you lived in the US you might have schemas of Black people that include your beliefs about their propensity for violent behavior. If you lived in The Netherlands, Norway, or some other European country you may have schemas about immigrants that also include potential violence. Hence when you meet someone of a minority background, research suggests that you selectively attend to cues suggesting hostile behavior. All cultures have deeply rooted stereotypes not based on personal experience.

The reason we have schemas is that they allow us to complete needed information prior to interaction. Having schemas gives you some clue on how to behave toward a given social group, or how to behave in a given role (like that of a student). Our schemas may of course be prejudicial, and have little to do with social reality. Still schemas are enduring because we want to believe what we want to believe, the truth be damned. However, without schemas our world would be a giant buzzing beehive with no order or direction. Schemas are important because when we are confronted with a new situation we can understand it better – or so we feel – from our stored knowledge of similar situations. They help us process information more efficiently, and help us understand what part of the situation we must attend to, and what is of less or little importance.

Schemas influence memory, what and how we remember a particular situation. In one study the participants were asked to watch a videotape of a husband and wife having dinner together (Cohen, 1981). Half of the students were told that the woman in the videotape was a librarian, the other half that she was a waitress. Subsequently the participants were asked to list what they remembered of the interaction. Interestingly, when the woman was described as a librarian the participants in the study “remembered” her drinking wine, whereas when she was described as a waitress she was seen drinking beer. In other words, memories were influenced by the participants’ stereotypes of people in these two roles. What this and other studies show is that behavior consistent with a preexisting schema is remembered better and enjoys an advantage when it comes to recall (Carli, 1999; Zadny & Gerard, 1974).

3.2 Social stimuli and preexisting schemas
Based on our own experience and that of others we all carry schemas as part of our interpretive mental arsenal. How are these schemas activated by social stimuli, allowing for more efficient judgment and decision-making? One of the significant factors that determine schema activation is the person’s expectation in a given situation. If a police officer encounters a Black person in a dark alley, is it his expectation that he is confronting a criminal? If so, that expectation will activate schemas already existing in the mind of the police officer, and any abrupt or threatening movement by the minority person could lead to an unjustified shooting. Such events have occurred repeatedly (Bargh & Ferguson, 2000; Sloman, 1996). These are all examples of automatic thinking where the minority person was perceived as threatening and the officers opened fire based on their preexisting schemas. As we have seen, some situations require rapid response, and in the US this frequently means shoot first and ask questions later.

Schemas are frequently applied in gender relations to help interpret what to expect from the other gender. For insecure people perceived threat may be part of their schemas. If a threat is perceived the individual will be less likely to take the risk necessary to build intimate relationships. One consequence of this schema is the greater likelihood of living a lonely life. Many studies have demonstrated the ability of expectations to elicit specific schemas which then serve to guide subsequent information processing (Hirt, MacDonald, & Erikson, 1995; Stangor, & McMillan, 1992).

Another critical factor leading to schema activation is similarity between the social stimulus and the preexisting schema. You turn on the television and see a football match in progress. If you are a fan you have seen many matches before, perhaps even matches involving the teams featured. Consequently you possess schemas about the teams, the individual players, and the likely outcome of the encounter. In other words the features of a particular situation, a sporting event, a family gathering, or some other social happening will advise you on what schemas to enlist, and how to interpret what you are observing (Holyoak & Thagard, 1995; Spellman & Holyoak, 1992). The recency of schemas also leads to activation. If a schema has been employed recently it is more readily available, and therefore more likely to be activated given minimal stimuli. The importance of recent activation has been demonstrated in several studies (Ford & Kruglanski, 1995; Herr, 1986; Todorov & Bargh, 2002).

The importance of a schema also determines to some extent its activation. Probably every situation is capable of eliciting a number of schemas. Sometimes misapplication occurs, as the same situation may elicit different schemas. War-related schemas have affected US policies over the past several generations. One schema derives from the appeasement of Nazi provocations prior to the Second World War. That schema leads people and decision makers to say, “We must stand up to dictators”. Another schema is the quagmire that the war in Vietnam became for US forces, and the desire not to repeat that experience. Politicians constantly evoke schemas of both events in order to support or oppose a particular war-related policy. Which of these two schemas do you think American decision makers employed with respect to the Iraq war? It seems clear that the war in Iraq took place regardless of contrary evidence that no weapons of mass destruction were being produced. Recent reviews of the pretexts for the war showed without doubt that the reasons given for going to war were false. The only rationale left for that war was “we must stand up to dictators”, the schema of World War II. Thus the past has long arms that affect much of what happens today and in the future. Research has shown that it is not difficult to elicit either of the two war schemas, with consequences for decision making (Gilovich, 1981).

When the situation is important it is more likely that several schemas are brought into play, and the individual may evaluate longer and make more careful and complex decisions. Research shows that when the outcome is important, and when the individual’s accountability is at stake, the inferences produced are more complex and based on several schemas (Chaiken, 1980; Tetlock & Boettger, 1989).

Of course we do not all respond in the same manner to stimuli. There are always individual differences present, and the same stimuli may elicit different schemas. Some people are quite comfortable with ambiguity whereas others become very anxious unless situations are clearly defined. Differences in the need for structure affect the reliance on schemas. Intolerance of ambiguity requires that the person has in hand more or less ready-made responses. In short, those who do not tolerate ambiguity are more likely to rely on cognitive structures, whereas those with high tolerance deal with complicated situations with less reliance on schemas (Bar-Tal, Kishon-Rabin, & Tabak, 1997; Neuberg, Judice, & West, 1997; Chiu, Morris, Hong, & Menon, 2000).

Is consciousness of stimuli necessary for activation of the schema? Can schemas get primed for action even if the individual is unconscious of the presence of the stimuli? A pioneering study (Bargh & Pietromonaco, 1982) showed that even when stimulus words were presented too rapidly to register, they still could affect the elicitation of specific schemas. Even when the stimulus is subliminal, below the threshold of awareness, the stimulus still functions to prime specific mental structures. This finding has been supported by many other studies (Debner & Jacoby, 1994; Draine & Greenwald, 1998; Ferguson, Bargh, & Nayak, 2005; Klinger, Burton, & Pitts, 2000).

3.3 Cultural differences
We shall throughout this book apply the cultural concepts of interdependent and independent societies outlined in chapter 2, as they have applications in a variety of situations and play a role in many social psychological constructs. Westerners and East Asians vary in how much they depend on the situation and on contextual information to come to conclusions. In general East Asians are more likely to rely on situational cues and environmental factors to explain behavior. Westerners are more likely to attribute behavior to dispositional causes; i.e., behavior is largely a function of the individual’s personality and mental structures. East Asians explain events by pointing to the context and the importance of the situation. The individualistic culture in the West predisposes people to attribute blame or success to the individual and thus ignore the social context. The thinking of East Asians seems more complete as attention is paid to the whole social environment, whereas Westerners focus on the acting individual (Ji, Peng & Nisbett, 2000).

Our schemas are to a large extent a reflection of our culture. What is important or significant in a culture is committed to memory, and the resulting schemas are ready for use in daily life. In western cultures there are new schemas related to developments in technology. In rural regions of Africa existing schemas may have to do with the local culture, and farming or cattle transactions. In one early study an interviewer compared what a Scottish settler and a local Bantu herdsman remembered from a complicated cattle sale (Bartlett, 1932). The Scottish settler remembered little and had to consult his records for specifics, whereas the Bantu herdsman could produce from memory a variety of data such as how many cattle were sold and for how much. One would draw the conclusion that since cattle transactions are a central part of the Bantu economy, they have developed excellent schemas for these culturally relevant data. In all cultures people are faced with a vast amount of information. Our schemas help us reduce this complexity to manageable proportions, to allow for efficient cognition and decision-making. Schemas are therefore a form of automatic thinking.

Schemas are based on the past but are used to predict the future. In the west prediction of the future is based on continuity. In general the world is seen to continue to move in the same direction it currently moves. East Asians on the other hand emphasize change. The Tao (the way) is an Asian symbol that views the world as being in one of two states at any given moment, always changing. The yin and yang, getting better or worse, stronger or weaker, are dualities that emerge from Taoist thinking. These ideas should predispose East Asians to think that current events are likely to change course, rather than staying on track in the current direction. For example, if asked whether a dating couple will continue to date, Americans are likely to say yes (continue course), whereas East Asians are less likely to think so. In estimating economic growth rates for the world economy or likely cancer rates, Americans overwhelmingly believe that current trends will continue whereas Chinese are more likely to think they will reverse course (Ji, Nisbett, & Su, 2001).

3.4 The use of racial stereotypes and schemas
We have mentioned racial stereotypes before. A number of studies have demonstrated the presence of racial stereotypes and how they affect perception. In one study participants would repeatedly see a gun in the hand of a minority person when the individual was just holding a tool (Payne, 2001). In a study of video games the participants were asked to press a button saying shoot if the individual in the video had a gun, and do not shoot if he did not. The results showed that the participants were more likely to pull the trigger when the stimulus person in the video was Black, whether or not a gun was present (Correll, Park, Judd, & Wittenbrink, 2002). These errors in perception are obviously based on schemas that Black people are violent. Our culture contains very persuasive schemas that link race and violence. These are examples of automatic thinking derived from society. Another example of the cultural direction of thinking was the different reactions to the publication of cartoons of Mohammed in Denmark in 2006. In a variety of Muslim societies there was an automatic call for death for those who were deemed guilty of offense, which from a different cultural perspective seemed absurd.

In summary, schemas provide certain advantages in the psychological economy of the individual. They help us process enormous amounts of information. Otherwise we would be overwhelmed by the sheer complexity of our world. Schemas also help us recall information, information that is consistent with the schema as well as inconsistent information (Corneille, Huart, Becquart, & Bredart, 2004). We have already seen how shopping would be delayed if we did not have schemas about products in the supermarket. One function of these mental structures therefore is to speed up processing. Often, schemas assist us in making automatic inferences. Having gender-related schemas means that we have a starting point for interaction, and do not need to start over each time we meet someone of the opposite sex. On the whole therefore schemas assist us in interpreting situations and people, and may be especially helpful in ambiguous situations where information is limited.

There are obviously also disadvantages in the use of schemas. Many errors occur, as we saw in the case of racial stereotypes. In general schemas lead to simplification, resulting at times in wrong interpretations. To that we may add that, once present, schemas are difficult to change. Since they serve psychological security by making thinking automatic and efficient, we are reluctant to get rid of these ideas, even when they are misleading. People will believe what they are prepared to believe and what they want to believe.

3.5 The self-fulfilling prophecy
We have many schemas, some of which actually become true, because our behavior elicits the expected responses from others. Rosenthal and Jacobson completed the most famous study on what was called the self-fulfilling prophecy in 1968. They initially administered an IQ test to students in an elementary school. Subsequently they returned and identified some of the students as “bloomers”, i.e., some of the students were identified to the teachers as scoring so high that they were sure to “bloom” over the following academic year. In actual fact those identified as “bloomers” were just a random sub-sample, and therefore in no way different from the other students. The only way they differed had to be in the minds of the teachers, who were told of their supposed, but in fact bogus, intellectual gifts. Keep in mind that the students were not given any feedback, nor were the parents told of the results of the test. In other words an expectation schema was created in the teachers’ minds about this subgroup, which in actual fact was randomly chosen and had no particular gift. Could the mere fact that the teachers now had new and higher expectations (schemas) affect the students in some way to actually improve their IQ scores? That is what happened. The students labeled “bloomers” showed significantly greater gains in IQ scores when compared to the rest of the students. Similar results have been replicated in other studies (Blank, 1993; Jussim, 1991; Smith, Jussim, & Eccles, 1999).

What happened? Did the teachers just decide to give all their efforts to helping “bloomers” while disregarding the other students? That was clearly not the case in any conscious way. Rather the teachers had incorporated a schema about the “bloomers’” abilities, and thus any differential treatment was a consequence of automatic thinking. Is it not amazing? There was no conscious attempt to treat the selected students differently, but that is what happened. This differential, but unconscious treatment was also found in other studies (Brophy, 1983; Rosenthal, 1994; Snyder, 1984). It appeared from analysis that the differential treatment included a warmer emotional atmosphere, more personal attention, and support. The teachers also challenged the “bloomers” to a greater extent with more difficult material, and provided better feedback. The teachers also included more opportunities for bloomers to participate in class. The self-fulfilling prophecy operates by first creating an expectation schema (what is another person like?), which in turn influences how the person is treated, which causes the person to act consistently with the original expectation.
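The logic of that design can be made concrete with a toy simulation. The sketch below is an illustration only: students are labeled “bloomers” at random, an assumed expectation boost is added to the gains of labeled students, and the group means are compared. The effect size, noise, and sample size are invented for the example and are not the Rosenthal and Jacobson data.

# A toy simulation of the "bloomer" design; all numbers are invented, not taken
# from Rosenthal and Jacobson (1968).
import random
import statistics

random.seed(0)
n_students = 200
bloomers = set(random.sample(range(n_students), 20))    # randomly labeled subgroup

def iq_gain(student):
    base_gain = random.gauss(4, 6)                       # ordinary year-to-year change
    expectation_boost = 8 if student in bloomers else 0  # assumed teacher-expectation effect
    return base_gain + expectation_boost

gains = [iq_gain(s) for s in range(n_students)]
bloomer_mean = statistics.mean(gains[s] for s in bloomers)
other_mean = statistics.mean(gains[s] for s in range(n_students) if s not in bloomers)
print(f"mean IQ gain, labeled 'bloomers': {bloomer_mean:.1f}")
print(f"mean IQ gain, other students:     {other_mean:.1f}")

Because the label is assigned at random, any difference between the two group means in such a design can only come from how the labeled students are subsequently treated.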

Such self-fulfilling prophecies may have very negative consequences. Although girls initially perform better than boys in grade school, as time goes by girls begin to fall behind boys on standard tests (Reis & Park, 2001; Stumpf & Stanley, 1998). There are those who would argue that this change is due to different information processing by male and female brains (Geary, 1996; Witelson, 1992). However, it seems more likely that the change occurs as a result of lower expectations for girls by teachers, and perhaps also in the home, thus establishing a self-fulfilling prophecy (Feingold, 1996; Hyde, 1997). If teachers are asked who their most gifted students are, they mention boys much more frequently, and parents too believe their boys are brighter (Jussim & Eccles, 1992; Raety, Vaenskae, Kasanen, & Kaerkkaeinen, 2002). Are the significant people in the lives of girls treating them differently in ways that affect the self-concept, thus leading to lower levels of achievement? Yes, although it is not a conscious process, but a matter of expectations built into automatic thinking with long-range consequences.

Perhaps we also damage boys by holding unfounded expectations that nevertheless produce negative outcomes. Kindlon and Thompson (2000) suggested that our schemas might well stunt the emotional development of males by expecting macho (violent and forceful) behavior, rather than supporting more healthy ways to express emotions. Violence in our society is at least partially due to such self-fulfilling prophecies. Since the self-fulfilling prophecy occurs automatically we reflect little on the consequences. Most people would be completely unaware that they practiced such discriminatory gender-based behavior, as were the teachers in the aforementioned studies. Social psychologists may help by bringing to greater consciousness how schemas operate, and which expectations are thought significant in our culture.

4. Heuristics: mental shortcuts for rapid response
Often we possess mental shortcuts that allow us to make efficient decisions. Heuristics are not always accurate, but still provide for good decisions in a relatively short period of time (Gigerenzer, 2000; Gilovich & Griffin, 2002; Nisbett & Ross, 1980). Schemas often serve such a purpose, based on our experience and that of others. There are situations, however, where we have no schemas. In other cases we may have too many, and need to select which is appropriate. At times, therefore, there are no ready-made schemas to employ. What to do? In these situations people use a mental shortcut called a heuristic in order to make judgments quickly and efficiently.

4.1 The availability heuristic: what comes easily to your mind?
In the case of the availability heuristic your judgment is based on what comes most easily to your mind; i.e., what is available (Schwarz & Vaughn, 2002; Tversky & Kahneman, 1973). If you have just read about something having to do with the situation, this recent information may be employed. At times what comes quickly to mind is the right solution. At other times it may lead to an inaccurate judgment. We sometimes use shortcuts to describe ourselves. In the experiment by Schwarz et al., participants in one condition were asked to find six examples of assertive behavior, while another group was asked to find twelve examples. Those who were asked to think of twelve examples had difficulty coming up with so many and consequently judged themselves as not assertive. Those who were asked for only six, since these examples came more readily to mind, concluded that they were in fact assertive. The ease with which people could bring examples to mind did determine self-judgment, as predicted by the availability heuristic.

When something comes readily to mind it is because there are probably many such examples. Therefore the availability heuristic is often a good estimate of frequency. If you were asked to estimate the number of psychology majors at your university, how would you make an estimate? If you have among your friends or acquaintances many who are psychology majors you may conclude that there are also many enrolled at the university. If you do not know any, and none come to mind, you may conclude that there are only a few students who major in psychology.

The availability heuristic then enables a person to respond to questions about quantity or frequency based on how quickly such information is retrieved from memory (MacLeod & Campbell, 1992; Manis, Shedler, Jonides, & Nelson, 1993). If examples can be brought to mind quickly it must be because there are many of them. We can think of many more male presidents of countries than female, so we come to the conclusion that there are more male presidents. We see in the news that most large companies have male CEOs; that also comes easily to mind and we draw similar conclusions. The speed and ease with which these examples come to mind, i.e. are available, therefore become a relatively accurate guide to overall frequency or probability.

Of course people do make errors with the availability heuristic. Some events make deeper impressions and therefore are more readily available. If you had experienced a hurricane at the Black Sea, you might conclude that this inland sea is stormy. Others, who have only enjoyed sunny days at the beach, may think of the Black Sea as very tranquil. In the Tversky and Kahneman (1973) study the participants were asked whether there were more words that begin with the letter “r”, or more words with the letter “r” in third position. It was easier for the participants to think of words beginning with “r”, and they therefore estimated a higher frequency. In actual fact there are more English words with the letter “r” in third position, but since they do not come readily to mind, the availability heuristic produced the wrong estimate.
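The letter-“r” claim is easy to check empirically. The sketch below counts first-position and third-position “r” words in a plain word list; the file name words.txt is a hypothetical stand-in for whatever English word list is at hand, one word per line.

# A minimal sketch for checking the "r" frequency claim against a word list.
# The path "words.txt" is an assumption; substitute any plain English word list.
def count_r_positions(path="words.txt"):
    first = third = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            word = line.strip().lower()
            if len(word) < 3:
                continue
            if word[0] == "r":
                first += 1
            if word[2] == "r":
                third += 1
    return first, third

if __name__ == "__main__":
    first, third = count_r_positions()
    print(f"words starting with 'r': {first}")
    print(f"words with 'r' in third position: {third}")

On most sizable English word lists the third-position count comes out higher, which is exactly what intuition based on availability misses.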

We have also seen that when violence is over-reported in the news it leads many people to become fearful, a state of mind not justified by real statistics. The violence of video games may lead a young person to see a world of violence in which you strike first to avoid being a victim. In each case there is a misleading emphasis on the frequency of violence that is not reflected in the real world, but nevertheless affects behavior. In the western media reports of murder occur every day. In actual fact the US is the murder capital of the world with tens of thousands of victims each year. On the other hand we seldom hear about suicides in our society as they seem less dramatic, and therefore less newsworthy. This leads people to estimate that the rate of murder is higher than that of suicide, when in actual fact suicides outnumber murders by a 3 to 2 margin. Dramatic deaths get more press coverage and are therefore more available. Research shows an overestimation of deaths from accidents and other dramatic causes and an underestimation of quieter deaths due to disease (Slovic, Fischhoff, & Lichtenstein, 1982).

Likewise, we tend to overestimate our own contribution to ongoing projects. Why? Because we are familiar with what we have done, and it comes readily to mind. In general people overestimate their own contributions, and underestimate those of others (Ross & Sicoly, 1979). Often people feel they are under-appreciated for the work they do, and likely this is because of misapplication of the availability heuristic. Essentially then, the availability heuristic helps us judge the frequency of some situations, the probability that certain outcomes will occur, or the size of some category by how readily examples come to mind (Schwarz & Vaughn, 2002). The ease of generating examples seems to guide our judgment.

4.2 The representativeness heuristic
Suppose you are asked whether a specific person belongs to or is representative of the national category Dutch. If you have limited information you might look for characteristics that match or are similar to a prototype you carry in your mind of the typical Dutch person. With little information to go on, people often use the representativeness heuristic, judging by degree of similarity. It is as if this mental shortcut tells you that a member of any population group ought to look similar to the prototype you carry in your mind. Does the person look Vietnamese, or Chinese, or Japanese? Which category is the person judged to be similar to?

If you think the typical values of psychology are the pursuit of truth and the helping relationship, and you observe these traits in a person, you might wrongly predict that the person will become a psychology major at university. The function of the representativeness heuristic is to look for matching or similar behavior. Do murderers have features in common? If you are faced with such a person could you judge the person a member of that category? Obviously it depends on the accuracy of the prototype you carry in your mind. Many times people are surprised by the clean-cut appearance of serial or mass murderers in the western world. On the other hand we may have a good handle on other categories, such as members of racial or ethnic groups.

The representativeness heuristic also encourages assumptions about the correlation between causes and effects. If “like” goes with “like”, we would expect that large causes have large effects. A small earthquake would cause less damage, a large earthquake more. In other words small goes with small, large with large. However, that is not always true. We know that very small organisms can be deadly, as in the case of the AIDS virus (Gilovich & Savitsky, 2002). Again, we must use caution when making such estimates or judgments. The symptoms of an illness do not always resemble the cause or cure, although the representativeness heuristic has influenced traditional medicine in that direction. For example in traditional Chinese medicine those who had vision problems were often fed chopped bats because bats were assumed to have excellent vision (Deutsch, 1977). Even today the representativeness heuristic continues to influence thinking about body and health. People are told to avoid milk if they have colds, because milk resembles the phlegm typical of cold sufferers. In fact there is no relationship. Many of us have heard the term “you are what you eat”. Of course that is sensible to some degree. Eating too many calories will produce fat in the body. However, just because you eat only pork does not mean you will look like a pig or be piggish in your behavior.

Even in the pseudoscience of astrology we can observe a resemblance between the supposed sign and personality. Those born under the sign of Virgo (virgin) are supposed to be modest and retiring, whereas those born under Leo, the lion, are supposed to be forceful leaders of men. Obviously there is no validity to these pseudo beliefs, but that does not prevent people from believing sincerely. Even a powerful person like Reagan, the former president of the US, was a “true” believer (Abell, 1981; Zusne & Jones, 1982). It is kind of scary to think that the leader of the most powerful nation applied the representativeness heuristic and believed in such nonsense. Himmler, the exterminator in the Nazi empire, and other ranking members of the regime also believed in astrology. History has shown the foolhardiness and stupidity of these beliefs.

Other fields are also influenced by the representativeness heuristic, e.g. graphology, the analysis of handwriting. It is a field of continued investigation, in which some reliable relationships have been found between handwriting and behavior (Nevo, 1986). If your handwriting is shaky perhaps it is a clue to a nervous personality or some neurological disorder. Doctors’ handwriting in the western world is generally considered unreadable. Does that say something about doctors’ personalities, or is readability simply not a priority for busy and hardworking medical experts? If handwriting slants, does that reveal anything about the person? Is the person who slants to the left more likely to be a good socialist, and those who slant to the right pro-capitalist? We may all see that these are absurd conclusions that reflect the representativeness heuristic. In short, the representativeness heuristic is a mental shortcut where we categorize something if it is similar to what is believed to be a typical or representative schema.

4.3 The problem of illusory correlations
At times we may observe the availability and the representativeness heuristics operating together. When events occur together we are often led to believe they are correlated when in fact it is only coincidence we are observing. An illusory correlation occurs when two variables are believed to be correlated, but in fact are not related (Chapman & Chapman, 1967). This is an issue of no small importance to psychology. For example clinical psychologists often rely on projective tests like the Rorschach and Draw-a-Person tests to make clinical diagnoses of the mentally ill. Research has demonstrated that these projective techniques fail most standards for reliability. For example in the Draw-a-Person test the client is asked to draw a picture which the psychologist then interprets for signs of underlying mental illness. Clinicians report many connections between drawings and specific pathological categories. The drawings and the pathologies seemed to go together in the mind of the clinicians. For example people who suffer from paranoia are thought to draw very large or small eyes on the person depicted.

These illusory correlations were investigated in the Chapman study. The investigators randomly presented 45 Draw-a-Person pictures, 35 reportedly from mentally ill clients, and 10 from graduate students. Each of the pictures had a random description attached. There was no clinical relationship between the description and the pictures; the descriptions were applied randomly and not connected to the picture in any way. In one case the description was “is very suspicious of others”, in another “is easily frightened”. The results showed that although no relationship between description and picture existed, the participants observed the same clinical relationships as the clinicians. Large eyes, for example, also indicated paranoia to the participants. The participants perceived the same illusory correlations as the clinicians, reflecting the joint operation of the availability and representativeness heuristics. In another part of the experiment the investigators asked which body parts were related to which mental disease category. Again the respondents answered in ways similar to the clinicians, employing the same heuristics.
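A toy simulation makes the point that such perceived relationships have no statistical basis. The sketch below pairs a drawing feature (large eyes) with a description (suspiciousness) completely at random; the proportions used are invented for illustration, not taken from Chapman and Chapman.

# Toy illustration of an illusory correlation: the pairing is random, so any
# association an observer "sees" is in the observer, not in the data.
import random

random.seed(1)
n_pictures = 45
large_eyes = [random.random() < 0.3 for _ in range(n_pictures)]   # drawing feature
suspicious = [random.random() < 0.3 for _ in range(n_pictures)]   # description, attached at random

both = sum(e and s for e, s in zip(large_eyes, suspicious))
eyes_only = sum(e and not s for e, s in zip(large_eyes, suspicious))
susp_only = sum(s and not e for e, s in zip(large_eyes, suspicious))
neither = n_pictures - both - eyes_only - susp_only

print("large eyes & suspicious:", both)
print("large eyes only:        ", eyes_only)
print("suspicious only:        ", susp_only)
print("neither:                ", neither)

With random pairing the cell counts hover around chance levels, yet observers in such studies still report that large eyes “go with” suspiciousness.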

4.4 Other cognitive short-cuts
We can also imagine “what could have been in a possible event, if only the conditions had been different”. Kahneman and Tversky (1982) called this the simulation heuristic. This heuristic helps us understand the psychology of near misses, or “if only something were slightly different”. If the couple driving had arrived at the railroad crossing only five seconds later, the passing train would not have killed them. We use this heuristic for a variety of mental tasks, for example to help us understand regret or grief (Seta, McElroy, & Seta, 2001). Suppose you go to the airport at the same time as another traveler, and both of you are delayed by traffic jams. The other traveler is told his plane left 30 minutes ago, whereas you are told that your plane left only minutes ago. Who would be the most frustrated? Undoubtedly you, who barely missed the plane and who, through the simulation heuristic, can imagine a different outcome: “if only you had left ten minutes earlier”.

Counterfactual reasoning occurs when some negative event leads people to think of more desirable outcomes given different circumstances. You did poorly on a test. You might tell yourself “if I had only studied more I would have passed” (Markman & Tetlock, 2000). Counterfactual reasoning involves trying to imagine alternative versions of real events. What if this had happened? When something unpleasant takes place, does it help us to imagine how things could have been, with a different version of the event? We can in fact feel better if we imagine how much worse the event could have been. The couple was killed at the railroad crossing, but thankfully no one on the train was injured, we might reason (Taylor, Wood, & Lichtman, 1983). The simulation heuristic might also help you prepare for future unpleasant events. Consider the following experience of one of the authors. In two separate years I fell from a high ladder, and the second time I injured myself seriously, as mentioned before. I have often gone over what happened in my mind. I am standing at the top rung, my chain saw in my right hand, reaching out for a few remaining branches, taking a terrible chance on an insecure ladder. Well, it gave way. It would have been so easy to avoid: not standing on the highest rung, waiting until someone could support the ladder, or letting someone younger take charge. Simulating it, I also realize I could easily have died as I lay injured on the ground. That from my perspective would be a worse outcome, so I am lucky. I can also imagine that I will not find myself in the same position again. That is preparing for the future. I was highly motivated to change, one of the important functions of counterfactual reasoning and the simulation heuristic (McMullen & Markman, 2000).

4.5 The anchoring heuristic
When we are asked to judge some event we need a reference point based on previous experience. How far will the Amsterdam football club Ajax go in the coming Champions League? Since we really do not know, how can we come to some assessment? We can start by thinking of past Champions League campaigns, whether the Ajax players this year are the same as last year, and the nature of the other teams in the league. The previous international competition becomes an “anchor” around which points can be added or deducted based on the other variables. The anchoring heuristic is simply a departure point for coming up with some reasonable estimate of a future event. As with the other heuristics, the anchoring heuristic is a device for stimulating our memory and eliciting the appropriate schema.

The anchoring heuristic may also be used to estimate the average number of supporters who will attend the home matches of Ajax in the Amsterdam Arena. Again you can reference the numbers from the previous competition, let us say 40,000 spectators. This time around you estimate 56,000 spectators (a fully booked stadium), because the team has improved and there is a new coach. The previous season again served as the anchor for estimating the current competition.
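Anchor-and-adjust reasoning can be written down as simple arithmetic. The sketch below only illustrates the idea; the adjustment values and the capacity figure are assumptions echoing the example above, not data.

# A minimal sketch of anchor-and-adjust estimation; all figures are illustrative.
def anchored_estimate(anchor, adjustments, ceiling=None):
    """Start from an anchor, add adjustments, and optionally cap at a ceiling."""
    estimate = anchor + sum(adjustments.values())
    if ceiling is not None:
        estimate = min(estimate, ceiling)
    return estimate

attendance = anchored_estimate(
    anchor=40_000,                                         # last season's average attendance
    adjustments={"improved team": 10_000, "new coach": 6_000},
    ceiling=56_000,                                        # a fully booked stadium
)
print(attendance)  # 56000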

5. Intuitive versus controlled thinking
So far we have taken note of the evidence for two types of thinking. The first type is the automatic thinking represented by schemas and heuristics. The second, more controlled thinking is represented by counterfactual thinking and thought suppression. The difference between the two forms of thinking is the difference between intuition, which is automatic, and reasoning, which is controlled. We seem to have two minds when addressing a problem, or two systems of thought. The presence of these two systems has been reported in many studies (Epstein, 1991; Kahneman & Frederick, 2002; Sloman, 2002). The intuitive system responds quickly to situations that require immediate decisions. Our past experience or cultural influence helps a speedy process via the aforementioned schemas and heuristics. The second, reasoning system is controlled in nature and hence slower in processing information. Perhaps the decision is of great significance to the individual, or is perceived to have long term or broad effects, and hence requires a more deliberate process.

Whatever the problem, one will always be able to provide an answer through the rapid process of schemas and heuristics. When the answer is not appropriate or useful, it may then be overridden by the more deliberate rational system. The rational reasoning process serves as a censor, or final check, in order to avoid the common pitfalls discussed previously. Tversky and Kahneman’s work on heuristics has had a profound influence in several areas including psychology, but also economics, management, political science and other fields (Gilovich, Griffin, & Kahneman, 2002; Tversky & Kahneman, 1974). The fact that so many fields have found the concepts of heuristics and schemas useful adds a great deal of face validity to the paradigm. Controlled thinking is defined as conscious cognition, where the evaluations are intentional, and as a consequence voluntary, whereas automatic thinking occurs without any conscious effort. The second mode of controlled thinking serves as a check or balance for automatic thinking. If a decision from automatic thinking is not functional or contains problems, and if the issue is important, the individual will be motivated to reevaluate.

Think of the commercials that are played on television. Often these advertisements are on the screen for only a few seconds. The objective is not to have the viewer go through a process of weighing the pros and cons of the product. In selling a particular kind of toothpaste the manufacturer does not want you to engage in controlled thinking, or go through a serious process of evaluation as to which brand is best from the point of view of dental hygiene. All they want is to engage your automatic system to create schemas and name familiarity. Next time you go to the supermarket you will not engage in some dialog with your inner self, “yes, this product is better, I know the research”. No, rather than such a deliberate process the advertiser manipulates the unconscious mind, associating the product with simple slogans: “will make your teeth brighter”, or “9 out of 10 dentists recommend this toothpaste”. Neither assertion has to be true, but if they are implanted they may affect your purchasing behavior (Chaiken, 1987; Petty & Cacioppo, 1986; Petty, Priester, & Brinol, 2002). In many ways political campaigns are based on similar automatic manipulations.

Suppose however, that the message on television is sufficiently significant to encourage you to turn off your internal automatic pilot and listen carefully. Some studies do show that when people face significant tasks and decisions they will make more complex and accurate decisions (Kruglanski & Webster, 1996). On the other hand, when it does not really matter what the outcome is, when your life will not change regardless of the brand of toothpaste you buy, the automatic pilot will dominate (Kruglanski, 1989; Trope & Lieberman, 1996). Even when people make efforts to understand the world they will still make many errors. We are still influenced by wishful thinking, and our belief systems will still override any evidence to the contrary. Training in the scientific mode of thinking and a sufficient degree of skepticism are important defenses against illusory thinking. We can observe in any culture very intelligent people who still maintain absurd thoughts and beliefs. Intelligence alone is not a sufficient defense against deluded beliefs and behavior. Rather, we must be skeptical of ourselves, and repeatedly revisit decisions to see if they conform to some objective standard of truth (Wilson & Brekke, 1994).

5.1 Automatic thinking governs much of our behavior
The amount of research on heuristics and schemas should also suggest that these forms of thinking are of great importance to the psychological economy of the individual. In our busy and complex world we could not exist unless we had rapid response systems that might be more or less accurate. There is also a strong need for more complex reasoning as noted above. For example, we have seen how false minority stereotypes can have very negative consequences for individuals and society.

Automatic thinking is so pervasive in all areas of life, and yet we by and large remain unaware of its presence. Technology has brought us to the point that machines mimic the human condition. Just like people, modern jetliners manage very complex operations, including takeoff and landing, by automatic pilot, a computer-based response system. Only in emergencies, when the automatic response system is inadequate, must the pilot take over and save the plane. It is also important to remember that we might think we are controlling our thinking, and that our behavior is therefore rational, when in fact we are just rationalizing decisions made previously by automatic pilot. Belief in our rational behavior can be just another illusion (Wegner, 2002). In fact, despite our beliefs in our rational thinking, it might still be controlled automatically or by the environment; we have just placed a more desirable label on it. Even when we believe, sincerely, that our behavior is based on rational thought it may in fact be quite automatic. To develop rational human behavior is perhaps more a goal than a reality for most people.

5.2 Is the development of rational thinking a hopeless project?
Shall we give up, or are there some things we can do in education that might improve controlled and deliberate thinking? Many of the problems we have discussed in social cognition could be ameliorated by training in statistics and research methodology (Nisbett, Fong, Lehman, & Cheng, 1987). Training in economics and other forms of logical education may also help (Larrick, Morgan, & Nisbett, 1990). Teaching people basic statistical skills would help the reasoning process, as statistics is a system of logic that is the foundation of all scientific enterprise. Such courses would involve the ideas of probability, how to generalize from a small sample to a population, and the nature of random sampling. In fact studies have demonstrated that our reasoning powers may be improved through such courses (Crandall & Greenfield, 1986; Malloy, 2001; Nisbett, Fong, Lehman, & Cheng, 1987). The aforementioned research also shows that students in psychology and medicine improved more than those enrolled in law and chemistry. Among psychology graduate students the improvements were especially impressive. This finding should be an encouragement to all engaged in the psychological enterprise. Perhaps at some point all students at a given university should take statistical courses to reason better, become better scientists, and more informed citizens of the world. If our students are trained well in the sciences, and develop the appropriate skeptical attitude toward all knowledge, there is some hope that mystical, stereotypic thinking might be reduced in favor of better decision making.
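Those three course topics can be illustrated in a few lines of code. The sketch below draws a simple random sample from a synthetic population and shows how well the sample mean generalizes; the population parameters and sample size are arbitrary choices made for the example.

# A minimal sketch of random sampling and generalization; all numbers are illustrative.
import random
import statistics

random.seed(42)
population = [random.gauss(100, 15) for _ in range(100_000)]  # synthetic IQ-like scores

sample = random.sample(population, 50)                        # simple random sample
sample_mean = statistics.mean(sample)
sem = statistics.stdev(sample) / len(sample) ** 0.5           # standard error of the mean

print(f"population mean: {statistics.mean(population):.1f}")
print(f"sample mean:     {sample_mean:.1f}")
print(f"rough 95% interval: {sample_mean - 2 * sem:.1f} to {sample_mean + 2 * sem:.1f}")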

We might also ask people to consider whether they might be wrong. In one study people were asked to consider the opposite point of view. When asked to do this they often realized that there were different ways of construing the world (Lord, Lepper, & Preston, 1984; Hirt & Markman, 1995; Mussweiler, Strack, & Pfeiffer, 2000). People can be trained to use their minds and avoid simplistic and automatic responses. It is obviously a major responsibility of the educational system to inculcate skeptical attitudes in young students from the earliest age. Instead, in most nations early schooling is used primarily as a socialization tool to encourage conformity to social ideology and standards. Of course all nations have the right to socialize children and young people. In doing so, however, they create schemas that permit automatic thinking. The calls by people in the streets of Afghanistan for the death of those who are believed to defame the Prophet are results of such schemas, as is most of the international violence in the world.

6. Social cognition and clinical psychology
All human beings make judgments about others, and as we have seen psychologists are subject to similar errors. We all walk around with “implicit” personality theories in judging other people, yet remain completely unaware of what influences our judgments. Our stereotypes are examples of such theories. We might say “women are emotional” or “athletes are aggressive” or “sales people are extroverted”. These are all examples of implicit personality theories that serve as the aforementioned schemas in easing our interaction with others. We often do not have a good handle on what influenced such thinking (Nisbett & Wilson, 1977). We also judge ourselves. In general we tend to believe what is said about us, as long as it is positive (Shavit & Shouval, 1980). What guides acceptance of self-descriptions is the degree of positive traits included in the assessment. Up to a point, the more favorable the description, the more it is accepted as factual. This low level of cognition can also be observed in cases where people accept fake self-descriptions as equally valid, or in some cases even more valid, than those based on objective testing. People are not able to distinguish between the validity of real descriptions and those that are pure inventions. We seem to have endless capacity for self-delusion.

Professional clinical psychologists are subject to similar errors. Often clinical judgments are based on projective techniques that have little reliability or validity. But the patient is impressed by the clinician and believes in the diagnosis. The consequence of the diagnosis takes the route of the self-fulfilling prophecy. The clinician believes in the presence of certain pathology. He then treats the patient accordingly. Pretty soon the patient behaves consistently with these expectations. Professional judgment is subject to illusory correlations, seeing relationships where there really are none. Psychologists often become overconfident by searching only for information that confirms the diagnosis rather than keeping an open mind. Followers of Freud will visit and revisit childhood, and will soon enough come up with a host of events which by themselves may have had little effect, but in confirming a diagnosis are seen as evidence for pathology. In believing there is a relationship, we all, including clinicians, are more likely to see confirming than disconfirming evidence. This is true not only for psychologists, but for all those who contemplate human behavior, whether economists or political scientists. Even physical scientists who were convinced the earth was flat used considerable energy to maintain that illusion, including sanction by religion.

Hindsight is always right. As we say, hindsight is 20/20, meaning that in looking back we have perfect vision. In one famous study Rosenhan (1973) and a number of his associates got themselves admitted to mental hospitals complaining that they heard “voices”. The claims were bogus, but were offered in an attempt to assess the judgment of clinicians. Otherwise the “patients” reported their life histories truthfully and exhibited no further symptoms. Most were classified as schizophrenic. The clinicians then confirmed the mental illness diagnoses of the bogus patients, finding “evidence” in the life stories told, when in fact the patients had no pathology. When Rosenhan later told the mental health workers about the experiment, he also advised them that more bogus patients would seek admittance. During the following three months 193 patients were admitted. Now the mental health staff suspected up to 41 of them of being bogus patients, although they were in fact genuine patients in need of treatment. In reality, Rosenhan sent no further bogus patients during the period. These results cast serious doubts on clinical judgment in the case of abnormal behavior.

Clinical psychology often has its findings confounded by diagnoses that are confirmed by looking only for supporting evidence. Snyder (1984) found evidence that clinicians look primarily for information that will confirm the traits they have diagnosed. Our beliefs about what is true generate information that confirms it, based on the process of selective perception (Dallas & Baron, 1985; Snyder & Thomsen, 1988). In several experiments it was shown that people will first look for confirming evidence before seeking disconfirmation. This bias is not at a conscious level. Our questions are biased by our desire to have the diagnosis confirmed. People who undergo therapy therefore become the persons that their therapists believe they are, having searched and found evidence for their pathology. We can see that intuitive reasoning is very flawed, and may at times do actual harm to the client seeking help.

6.1 Intuition versus statistics
Although most clinicians continue to have confidence in their clinical insights, intuition is a poor second best when compared to more objective methods. For example admission to university or graduate school is often based on a combination of statistical measures. Such objective measures consistently outperform subjective judgments in predicting student success (Dawes, Faust, & Meehl, 1989; Meehl, 1954; Meehl, 1986). We have already noted the superiority of logical and statistical reasoning, although we recognize that clinicians work in very difficult conditions and often in uncharted waters where intuition must play some role. It is important, however, to remember that patients and clinicians are subject to the same errors as other human beings.
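The kind of objective rule referred to here can be as simple as a fixed weighted combination of measures. The sketch below is a hedged illustration of that idea; the measures, weights, and cutoff are invented for the example and are not taken from Meehl or Dawes.

# A minimal sketch of a statistical prediction rule: a fixed weighted combination
# of objective measures. Weights and cutoff are illustrative assumptions.
def predicted_success(gpa, test_score, weights=(0.6, 0.4), cutoff=0.7):
    """Combine two measures (each rescaled to 0-1) into a single admission score."""
    score = weights[0] * gpa + weights[1] * test_score
    return score, score >= cutoff

applicants = {"A": (0.9, 0.8), "B": (0.6, 0.95), "C": (0.5, 0.4)}
for name, (gpa, test) in applicants.items():
    score, admit = predicted_success(gpa, test)
    print(f"applicant {name}: score={score:.2f}, admit={admit}")

The point of the comparison in the literature is that even such crude, mechanically applied rules tend to match or beat case-by-case intuitive judgment.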

In summary, we are often unaware of the particular influences, past or present, that shape our judgment of others. Selective perception may encourage inaccurate assessments. This is particularly true if we rely, as most of us do, on the stereotypes of society. All societies inculcate stereotypes about categories of people, gender, professions, ethnic groups and so forth. While there are elements of truth in stereotypes they are for the most part gross exaggerations. Our self-perceptions are particularly unreliable. Every time people go to eat Chinese food they are given a fortune cookie as dessert. Inevitably the fortune cookie encloses a written fortune. Equally inevitably the fortune is written in such a way as to be applicable to everyone. Some people, however, see particular meanings in what are, after all, random messages. Positive assessments are nearly always accepted, whether justified or not.

Mental health workers are subject to similar problems in social judgment. They may through intuition provide worthless diagnoses, and their clients, being convinced of the therapist’s professional competence, readily accept the judgment. After making the diagnosis the process is essentially one of confirming the decision. In psychoanalysis, for example, the “child is the father of the man”, therefore the therapist examines early childhood for clues to current problems. Since all people have experienced some issues in growing up it is not difficult to find the supporting data. Once the judgment is made, these erroneous diagnoses can easily be confirmed, leading to the self-fulfilling prophecy. Again, the proper attitude is to always keep an open mind. By being skeptical of ourselves we can avoid some of the many errors described in this chapter.

6.2 Social cognition and mental health
Mental ill health is accompanied by cognitive processes that affirm the patient’s maladaptive life perspective. We can ask what the thought patterns of the troubled personality are. Some patients withdraw from social interaction, feel unworthy, and lose interest in family or the social environment. Having a very pessimistic outlook on life may therefore affect the perception of experiences. What are just normal struggles for a healthy person can become insurmountable obstacles for the troubled person. Cognition plays an important role in perpetuating ill health, and therefore improvement may come about from reassessing how we think about ourselves.

6.2.1 Anxiety and cognition
The most fundamental problems in mental health are related to anxiety, and especially excessive anxiety. Some people are so anxious in social situations that they are unable to converse, effectively meet others, or apply for a job. Such anxiety can have sad consequences for the individual. An anxious person is less likely to lead a successful life, less likely to find a happy relationship, or master possible employment opportunities.

Why are we anxious? In many cases anxiety derives from our desire to make good and acceptable impressions on others. Fearing rejection is a primary cause of social anxiety (Leary, 1984; Maddux, Norton, & Leary, 1988). The aforementioned research indicated several significant social situations that produce anxiety. Applying for a job, where we meet a person who has the power to hire and fire, is one cause. Other powerful persons include teachers, police, and other sources of authority. Any situation where we are likely to be evaluated is a primary cause of anxiety. Perhaps you are meeting the family of your boyfriend or girlfriend for the first time and have a strong desire to be accepted, or as a student you are making a presentation in class and want to make a good impression on fellow students as well as the professor. Anxiety is also likely if we find ourselves in some new situation for the first time, and are unsure of correct or proper responses.

Shyness is a personality trait on which we all vary, from those who are very well adapted and extroverted to those who are extremely self-conscious. Some people spend all their lives worrying about what others think of them (Anderson & Harvey, 1988; Carver & Scheier, 1986). The social cognition of extremely shy people tends toward overestimating events as having personal consequences, and toward feeling, without evidence, that people are evaluating them negatively. Alcoholism is often a consequence for those who are anxious. Sadly it just reinforces feelings of worthlessness, and of course also provides an alibi for failure (Snyder & Smith, 1986). Our lives become what we think they should become.

6.2.2 Cognition and depression
Some form of negative thinking is central to depression. Depressed people view their experiences in very negative terms, and minimize what is good in their lives. Cognition is therefore distorted. Does the distortion antedate the depression, or follow the depressed feelings? Either way social cognition leaves the person in a trap of thinking worthless thoughts which in turn are expressed in lower work output and troubled relations with others. That social inadequacy in turn reinforces the feelings of hopelessness and of being inadequate. More importantly the depressed person’s behavior is likely to elicit rejection by others. If your work suffers from depressed feelings and thinking, is that likely to lead to a promotion or demotion? Depressed thinking is very self-defeating because it elicits in others the rejection that the anxiously depressed person wants to avoid in the first place.

Is depression a consequence of having unrealistic views of oneself and others? In severe depressions distortion in thinking is present. However, mildly depressed people often make more realistic judgments than non-depressed people (Alloy & Abramson, 1979). On the other hand non-depressed people are more self-serving and exaggerate their sense of control in life (Dobson & Franche, 1989). Perhaps optimism, even when not warranted, helps the individual to cope more effectively.

Among very depressed people thinking is dominated by self-blame, and self-attributions of personal responsibility. Sweeney, Anderson, and Bailey (1986) showed that depressed people compared to others are more likely to develop a negative attributional style, where they attribute failure to internal causes and faults. They tend to think depressing outcomes are going to last and are permanent, and will affect everything in life. Such self-blame leads to a sense of hopelessness (Abramson, Metalsky, & Alloy, 1989). So perhaps it is useful to be a little delusional, to emphasize the positive in self-presentation. Such distortion in thinking may help us be happier and lead more productive lives. Of course self-delusion can also have negative consequences when we ignore real problems that need correction, or take unnecessary risks.

Is it negative thinking that causes depression, or does depression cause negative thinking? There is little doubt that our mood affects how we think. If we are depressed the feeling permeates everything in our lives, and the world is a gray and unfriendly place. Depressed people view their parents as punitive and rejecting. Once brought out of their depression they tend to view their parents in positive ways, as do people who have never been depressed (Lewinsohn & Rosenbaum, 1987). With depression our memory is affected as we recollect childhood events or relationships. Our relations with others are negative, our hopes diminish, and the world seems more sinister (Mayer & Salovey, 1987). Forgas, Bower, and Krantz (1984) used hypnosis to create depressive or positive moods. The participants were then asked to view the same tape under the two conditions of happy or depressed mood. The results demonstrated how mood affects our perceptions and our cognitive judgment, with the same tape being judged differently depending on the induced mood.

One major problem for depressed people is that they often elicit negative reactions from others, and sadly they can also contribute to reciprocal depression in family members and those who associate with them. Depressed people produce depression in those with whom they associate. Hence it is no surprise that they are more likely to be divorced or fired from their jobs. All such rejection of course intensifies the depression (Coyne, Burchill & Stiles, 1991; Sacco & Dunn, 1990). From these findings we can answer our question: yes, depression has an effect on cognition and perception.

6.2.3 Can negative cognition produce depression?
Now we come to the second part of the issue. Does negative thinking come before depression, and can it therefore be a cause? Some research supports this contention (Sacks & Bugental, 1987). When we adopt a negative attributional style, depression is likely to follow. Lewinsohn, Hoberman, Teri, and Hautzinger (1985) describe the process as a vicious cycle. The negative attributions and expectations contribute to rejecting experiences that lead to unrealistic self-blame, which in turn reinforces the depressed mood (Seligman, 1989). We can see now that depression can be both a cause and a consequence of self-blaming cognitions.

7. We live in a lonely world
Loneliness is also related to self-defeating cognitive styles. Lonely people, like the depressed, are locked into a self-defeating vicious cycle where they blame themselves for their social inadequacy, and generally feel a lack of control in their lives (Anderson & Riger, 1991). Another distorted cognition is the negative view that lonely people hold of other people. You are not likely to establish relationships with others if you somehow convey your generally negative views. People seek company that reinforces their self-perceptions and whose relationship is experienced as rewarding. Lonely people therefore create negative impressions in others that few are likely to test in long-term relationships.

7.1 Negative social cognition and our health
Do negative cognitions that are accompanied by negative emotions contribute to poor physical health? Health psychology is a relatively new field; its division of the American Psychological Association was formed in 1979. It has long been considered likely that stressful events, if not handled well by appropriate cognition, may contribute to a variety of physical diseases. Some conditions thought to be implicated include heart disease, suppression of the immune system (making the individual more vulnerable to a variety of disorders), and effects on the autonomic nervous system (leading to headaches, and eventually to hypertension).

Heart disease has been linked to the anger-prone personality (Friedman, 1991). Under prolonged stress, hormones are believed to contribute to the buildup of plaque in the arteries, bringing on serious heart disease. Long-term stress may also compromise the immune system, producing vulnerability to a variety of diseases (Cohen & Williamson, 1991).

7.2 Optimism: taking control of our lives
Living in the Western world today means living in the midst of multiple demands and stress. As globalization proceeds, so, unfortunately, will the stress associated with our fast-paced lives. In the last couple of decades people have become more aware of the negative health effects of the common means of stress reduction employed by millions throughout the world: drinking to excess, smoking, and the pervasive drug culture. All these means of escape have very negative consequences and claim millions of victims each year through cancer, heart disease, and stroke.

A new health culture has emerged in response to these statistics. More people walk or ride bicycles today than in previous decades. Many have opted for a better lifestyle, trying to maintain vitality for as long as the human lifespan allows. Health clubs have emerged where people in sedentary jobs can get the exercise they need and reduce stress at the same time. Since stress is such a major culprit in health problems, there is also more awareness of the need to relax and to develop supportive relationships that overcome loneliness. Even tobacco companies have become so defensive about their health-robbing products that they now also advise on how to quit smoking. These activities are for the most part hypocritical given the highly addictive nature of nicotine: once they get a young person to smoke, they often have a customer for life.

Over-eating is another attempt to escape stress and its associated anxiety. When people feel their lives are not satisfying they often escape into today's fast food culture. In the Western world many believe that fast food restaurants like McDonald's bear much of the responsibility for the obesity epidemic among children and adults. Currently there is a movement to reduce access to these unhealthy foods in the school system.

However, despite such sensible efforts to improve health, many suffer ill health from the self-defeating cognition previously discussed. Negative attributional styles lead to self-defeating behaviors and a vicious cycle of self-recrimination. Just as pessimism may lead to ill health, so rethinking and developing a more optimistic outlook can help defeat hopelessness.

Early researchers (Visintainer & Seligman, 1983) showed in an animal experiment how one could induce learned helplessness. Rats were given electric shocks in two conditions. One group was given shocks, but with the possibility of escaping the painful stimuli. Another group, however, was restrained on the electric grid and not allowed to escape. The latter group developed what the experimenters called learned helplessness: since no amount of struggling allowed them to escape the noxious stimuli, the rats became passive and listless. The experimenters noted many negative health effects of learned helplessness, including cancers resulting from compromised immune systems. Stress is a culprit in disease (Dixon, 1986). Peterson and Seligman (1987) suggested that if pessimism brings ill health, then perhaps optimism could help reverse these effects. In their study optimists outlived pessimists. In another study of terminal cancer, patients who developed an optimistic cognitive style outlived those who were pessimistic (Levy, Lee, Bagley, & Lippman, 1988). Hopelessness and pessimism compromise the immune system, leading to early death (Kamen, Seligman, Dwyer, & Rodin, 1988).

Social psychology has made a contribution to better health by emphasizing that we are what we do; our behavior often produces attitudes and emotions. If we can change behavior, perhaps the thinking and the emotional consequences will also change. Behavior therapists maintain that inner dispositions simply follow behavior: if a person is shy, the remedy is assertiveness training, and the shyness will change or disappear. Rational-emotive therapy holds that emotions are the consequence of our thinking. If we consistently and chronically say negative things about ourselves, our emotions will be consistent with this negativity. If we change how we think, it should have positive consequences for how we feel (Mirels & McPeek, 1977).

7.3 Reversing negative attribution
The aforementioned negative attributions are maintained by negative cognitive styles leading to self-defeating behavior. However, it should be possible to reverse the negativity by reversing negative thinking and engaging in therapy, like assertiveness training, that directly confronts the problem. Since the negative attributions are not supported by who the person is, but may be the consequence of negative life experiences, it is possible to reverse them through therapy, as suggested by Abramson (1988). Changing attributions (taking credit for the positive and making more realistic assessments of the negative) helps depressed people achieve higher self-esteem and lower depression. By changing how we think we can improve our emotional health.

Summary
This chapter reviews some of the research on social cognition. How do people utilize information in making decisions? How do they interpret, organize, and respond to stimulation in the social environment? Part of the debate concerns two types of thinking, automatic and controlled. Automatic thinking requires no evaluation, like responses during a crisis. Other decisions, such as choosing a life partner, require more careful evaluation, that is, controlled thinking. Neither type is error free, as we are influenced in many ways. Still we have to make decisions in spite of often incomplete information, errors, and biases.

Information derived from our own experiences reflects many sources of bias. Our expectations determine what information we gather and what information we attend to. People favor information that lends support to their expectations. At the same time, we tend to give excessive weight to negative information, which leads to illusory correlations and stereotypes. Furthermore, decisions are often based on very small samples that are highly inadequate. Finally, anecdotal information appears to be a powerful but unreliable influence.

There is also a tendency to believe that other people have information not possessed by the individual, leading to a state of pluralistic ignorance. Another bias influencing cognition and decision-making is bias in memory. What we remember corresponds with what we desire and wish at this moment. Memory can also be manipulated by therapists who implant "false memories" and encourage the patient to remember, for example, abuse that never happened. Even our memories of dramatic events from the past change with the passage of time. Nothing in memory is permanent; all memory is malleable, and how we think things should have been gradually becomes how we remember things were.

However, many of our memories do not come from our own experience. Most of us will have no personal experience with the powerful people or events that shape the world we live in. Rather we obtain information from significant others and from the media, and use this as a reference in our decision-making. Unfortunately the media are not an unbiased source of information. The term yellow journalism refers to the tendency to manipulate the news and to emphasize the dramatic and the negative. The media report more violence and produce more fright than objective statistics justify. In addition to the media, the ideology of society, or of powerful groups within it, provides its own slant. Often these sources are not providing information as such but trying to persuade the individual.

Motivation and mood also play a role. People believe that what is real in the world is the information congruent with their vision of happiness. Being motivated, however, does not necessarily lead to more accurate judgments. Of course we have some ability to regulate our thoughts and feelings, but in experiments on thought suppression such exercises often come at a high cost. Moreover, a commitment to powerful evaluative beliefs overrides any appeal to rationality, and decisions made under temporary moods may yet have long-term effects.

Not all thinking involves careful evaluation. In fact we have mental structures called schemas, which organize our knowledge in preparation for automatic thinking. If we did not have these mental structures we would have to evaluate each new situation from scratch. By directing our attention in specific ways, and by filling in missing information, schemas provide an immediate basis for interaction. How else would we know how to behave when approached by a member of the opposite sex or another social category?

What activates these mental structures? Research points to several factors in activating schemas. First, the expectation of a certain situation or interaction will elicit schemas from our mental storehouse (e.g., females are more emotional). Secondly, the similarity between the schema and a social situation may trigger the schema (e.g., last year's national cup final guiding our estimate of this year's result). Thirdly, how recently the schema was used may also lead to its activation. Finally, activation need not be a conscious process, as subliminal stimuli have been shown to activate schemas.

If the situation is important, a more deliberate controlled process may overrule the automatic processing of schemas. Individual differences in the need for schemas are also significant: those who have little tolerance for ambiguity have a high need for automatic structures.

Research has also demonstrated important cultural differences between Western and East Asian respondents. East Asians are more cognizant of the broader context of behavior, and their schemas reflect this understanding. Western respondents view behavior more as a function of the individual. These differences can also be observed in predictions of the future. Western respondents have an expectation of continuity, i.e., that the future will be a continuation of the current situation. East Asians, on the other hand, are more likely to expect discontinuity or change in the future.

Mental structures like schemas have great influence on memory. What we remember is largely a result of what our schemas direct us to attend to in the situation. Prejudice finds easy support by attending only to events that support our stereotypes. The purpose of schemas is to make interaction more efficient, but when predicated on error they obviously cause problems. Sometimes schemas result in actual behavior. The reason is that we often behave consistently with our expectations toward others, and therefore others fulfill our expectations. This self-fulfilling prophecy is a problem in education, with respect to gender issues, and in the diagnostic process in clinical psychology.

Besides schemas we also have heuristics at our disposal. Heuristics are mental shortcuts that assist in efficient evaluation and judgment. The Availability Heuristic refers to concepts that come most easily to mind. If something comes readily to mind, we reason, it must be because there are many such examples, and hence availability is a good estimate of frequency. However, errors in estimation are possible when using the availability heuristic. For example, the large amount of violence in the media leads people to overestimate the real violence in the world.

The Representative Heuristic allows for judgment of how similar A is to B. For example, it is possible to compare a person to the typical representative existing in our minds. How similar is the target person to a typical Dutchman? If similar, we may interact on that basis. The Representative Heuristic is also demonstrated in the expected correspondence between cause and effect. If an earthquake is large we expect the damage to be large. This heuristic can, however, also yield errors. For example, very small agents like HIV can cause very great damage.

A possible effect of the Representative Heuristic is illusory correlation. This occurs when two variables are thought to be correlated, but the association is only a coincidence. Such correlations occur in clinical psychology. For example, in projective tests it was thought that large eyes drawn by the client were a sign of paranoia. Illusory correlations arise at times through selective perception. Other mental shortcuts include simulation and counterfactual reasoning, in which we imagine alternative events to those that actually happened, and thus prepare for similar events in the future.

Schemas and heuristics are examples of intuitive or automatic thinking. When the issue is of great importance, controlled thinking may override the automatic. Or perhaps the automatic thinking is not working: you are using a toothpaste that promises whiter teeth, but it does not happen. You might eventually think about alternatives, a different toothpaste or some other whitening procedure. Automatic thinking governs most of our behavior even though we are not aware of the influence of schemas or heuristics. However, it is possible to encourage rational thinking. In particular, courses in statistics and logic may help us overcome mindless automatic thinking. Inculcating a scientific mode of thinking is very helpful on the road to rational thinking and behavior.

In clinical psychology we see that human beings, including clinicians, have an endless capacity for self-delusion. Often theory guides expectations, which in turn function as a self-fulfilling prophecy. Selective attention plays an important role here, as the clinician will frequently look for confirming evidence and ignore that which is not congruent. When we take illusory correlations as evidence of pathology and search only for confirming evidence, clinical judgment may lead to a false diagnosis.

Cognition plays an important role in mental illness. Consequently, reassessing what we think may serve to improve mental health. We have seen that excessive anxiety has negative consequences for many. The major reason for anxiety is our desire to make a good impression on others, and our fear of rejection. Negative thinking is related to depression. Depressed people emphasize the negative in their lives, and undervalue the positive. This distortion has both emotional and behavioral consequences. This works both ways. Negative feelings lead to depressed thinking, and negative cognition leads to depressed feelings. We often engage in self-defeating cognitive styles that work like vicious cycles producing self-blame, social inadequacy, and feelings of lack of control. On the other hand, optimism allows us to take control of our lives and helps us reverse the effects of negative thinking. Optimism helps improve both physical and mental health.




Being Human. Chapter 5: Attitude Formation And Behavior

There are many social issues that provoke public debate and engage people's attitudes. Around these issues we can observe how the three components of attitudes (beliefs, emotion, and behavior) are activated. Global warming is an issue with profound implications for our survival and indeed the survival of all species and the planet. Recently former presidential candidate Al Gore received the Nobel Peace Prize for drawing the world's attention to the dire prospects of our future unless we take decisive action. More and more, public opinion (beliefs) is coming around, and people are beginning to take seriously the warnings of the overwhelming majority of the world's scientists. The beliefs of many ordinary citizens are shifting toward the recognition that things cannot go on as they have in the past, and that we must change. Some people have fully engaged their emotions, as can be seen in letters to the editors of many newspapers and journals. These citizens feel the warnings at a very personal level and are willing not just to write letters, but also to go on marches (behavior) in protest. For many people, environmental beliefs are integrated into changed behavior: they make greater efforts to recycle, install energy-saving devices in their homes, and drive more energy-efficient cars. The world is changing, but is the rate of change sufficient to avoid future disasters? Only history will tell.

In the above vignette we can see various elements of attitudes and their effect on subsequent behavior, the important topics of this chapter. How did people form attitudes which brought them to the opposing sides of the global warming issue? Were their positions just fleeting opinions? Does the behavior of environmentalists who dissented from the indifference of politicians express more deeply held attitudes reflecting central values in their lives? Do those who express indifference toward environmental disaster hold more conformist attitudes that change with shifting popularity of viewpoints?

For people whose attitudes do not reflect deeply held values, attitude change can indeed occur rapidly. The popularity of President Bush has risen and fallen with dizzying speed. In the time before September 11, 2001, about 50 percent of the American people approved of his administration and leadership. This rose to 82 percent immediately following the attacks. However, by September of 2003, as the war continued to bring casualties, Bush's popularity had dropped back down to 52 percent. As we write now in 2007, Bush's popularity has fallen to an all-time low. Obviously many who liked Bush in the past were "fair weather" supporters who changed their views as the casualties and destruction mounted after the initial attacks.

This vignette shows the importance of understanding the formation and structure of attitudes, and how attitudes may be changed. Attitude research is a central topic in social psychology, both because it is salient to our concerns and because it is a topic social psychologists began working on early in the field's history.

1. The structure and components of attitudes
There is common agreement among most social psychologists about the presence of three components in attitudes. The affective or emotional component was exhibited in the aforementioned vignette by manifestations of anger and contempt between the opposing sides. The second component, the cognitive, refers to the beliefs that accompany the emotions, for example the newly formed beliefs about the fragility of the environment. The third component, the behavioral, refers to the behaviors elicited by the affective and cognitive components. In our example attitudes may produce demonstrations for or against environmental policies, but may also be manifested in other behaviors such as participating in election campaigns or signing petitions.

Any attitude is composed of these three elements, and is always oriented positively or negatively toward some attitude object. Practically anything you can imagine might be an attitude object. You can have attitudes toward persons, ideas, or things. For example you may be positive or negative toward the leader of your country, a person, toward his policies (ideas), or toward inanimate objects (like posters or flags which symbolize viewpoints). In fact you can have an attitude toward the classroom in which you study. Look around and see if that is not true (Eagly & Chaiken, 1998; Fazio, 2000; McGuire, 1985)!

In general the three components are consistent with each other. A person who has a positive attitude toward the environment is also likely to have a set of beliefs that sustain this position, and may behave in a consistent manner. At election time the supporter may vote for environmental candidates, write letters to newspaper editors, or donate money to a favored candidate. Affect, cognition, and behavior tend to move in the same direction toward the attitude object.

People may hold complex beliefs with respect to the attitude object, but the overall evaluation tends to be simple. One consequence of this apparent contradiction is that people may easily change certain beliefs, while still maintaining their basic evaluations. Many attitudes are like that, cognitively complex, but simple in terms of overall evaluations. These overall evaluations (positive or negative feelings) are more difficult to change than aspects of the supporting belief system. In the functional psychological economy of the individual, attitudes serve as primers. They make decision making more rapid by allowing for more or less automatic responses. Rapid decision-making is possible because the salient information is held in memory storage and is easily accessible to the person (Judd, Drake, Downing, & Krosnick, 1991; Sanbonmatsu & Fazio, 1990).

2. The formation of attitudes
Some researchers think attitudes have a genetic basis. Preston and De Waal (2002) found attitudes activating a certain branch of the motor cortex, which in turn supports certain behaviors. In other words our attitudes prepare us for action, and are associated in memory with other relevant emotions, beliefs, and behaviors. Tesser (1993) believed that at least some attitudes are linked to our genes. His study investigated identical twins who were raised in different environments and had no personal acquaintance with one another. These identical twins still had more attitudes in common than fraternal twins raised in the same home. In another study identical twins had more similar attitudes toward several attitude objects, like the death penalty and music. How can that be? Are there gene-behavior pathways that can be identified? Such pathways will probably not be discovered, as behavior is the consequence of many genes interacting with the environment. It appears more likely that genes affect broader personality characteristics like a person's temperament, and that these in turn affect more specific attitudes. However, while we must recognize a role for genes, the vast amount of attitude research in social psychology treats the social environment as primarily responsible for the formation of specific attitudes.

3. Which component dominates?
Some attitudes are formed primarily by cognitive experiences. A person's attitude toward smoking may be the result of careful contemplation of convincing research showing that smoking causes cancer and death. Although rates of smoking are dropping in some countries, they are alarmingly high in developing parts of the world like Asia. The World Health Organization expects that smoking may eventually kill 25 percent of all teenagers who start smoking in Asia, and that a billion people will die from tobacco-related diseases in the remaining 96 years of this century (Teeves, 2002). In the United States alone smoking causes somewhere around 500,000 deaths each year. In addition to cancer, smoking may also cause impotence in males and fertility problems in females. Some of these data have affected the cognitive component of attitudes toward smoking: half of the population in the United States smoked in 1950, whereas only 30 percent do so today. The cognitive component of attitudes includes all that we know about the attitude object, our beliefs, our memories, and images of the past. The cognitive component was predominant in affecting behavior for those who stopped smoking because they knew the research literature and the effect of smoking on health.

Some attitudes are predominantly affectively based, i.e., they involve emotional reactions to the object (Breckler, 1984; Zanna & Rempel, 1988; Bargh, Chaiken, Raymond, & Hymes, 1996). How much do we like smoking? Is it associated with pleasant images of friends or family, or a ritual smoking session after dinner, and does nicotine itself produce pleasure associated with smoking? The fact that 30 percent of Americans still smoke suggests that their attitudes are associated with emotional reactions to tobacco, along with cognitive defenses against the research that shows the negative effects.

For many people emotion is the primary determinant of attitudes toward a variety of objects. We have already noted how the popularity of political candidates is not stable, but frequently changes as a result of happenings in the larger world. How people feel toward a candidate is sometimes more important than what they think of his policies. In the US, and probably in other countries, people often vote as directed by their feelings, and often opt for policies that are contrary to their personal interests (Granberg & Brown, 1989). People still vote, although in decreasing numbers in the US, even when they know little about their party of choice or its policies. Political preferences are often based on some intuitive liking of the candidate or party, or on family tradition.

Many attitudes simply express our basic value system, and have little to do with reason or facts (Maio & Olson, 1995; Schwartz, 1992). Some people have deep-seated values about the right of the individual to self-destruct, and would reflexively vote against the control of cigarette smoking, or against placing additional taxes on its sale. We could marshal much information about the negative effect of secondhand smoke, and about the need for additional taxes to cover the health hazards to smokers and others, but for some it would have no impact. This picture of intellectual indifference is not encouraging for those who believe in the advantages of democracy.

Some attitudes are based on our observation of our own behavior (Bem, 1972). Since we continue to smoke, so we reason, we must have a positive attitude toward smoking. This idea suggests that many people do not know how they feel or think about things until they have engaged in relevant behavior. You go to a beach for the first time and come away feeling good; you observe this change in yourself and think, "I have positive attitudes toward the coast".

In the formation of our attitudes, different experiences may be more or less salient, and therefore some more easily accessible in memory. Some of these attitudes are cognitively related, and our memory therefore contains the necessary facts and experiences that sustain our predispositions. For other attitudes it is association with emotion that is significant. The pleasure of smoking, and the reinforcing role of peers and family, may provide rich emotional schemas that are difficult to change or remove. Finally, some attitudes are based on behavior. We have perhaps had direct experience with the consequence of smoking, lost a father or son, or we have personal health issues. These behavioral experiences may predominate in our attitudes toward smoking.

While a general consistency is present between the components of attitudes, there is no one-to-one relationship. In particular the relationship between attitudes and behavior is complex, as we shall see in a later section of this chapter.

4. Theories of attitude formation
Assuming that most attitudes are formed by experience, learning theory must play an important role in attitude formation. From this perspective attitudes are learned just like other habits (Hovland, Janis, & Kelley, 1953). We learn the information associated with an attitude object, and we likewise learn our feelings.

The most basic principle is learning by mere association. This idea emerged from classical conditioning theory. Two objects are presented together, one associated with affect, the other neutral. Learning theory suggests that we learn our attitudes from similar associations over time. A young person tries his first cigarette and feels acceptance from his peers. Smoking therefore becomes associated with approval and acceptance from others (though not necessarily from family). Reinforcement theory has also been applied to the learning of attitudes. If a behavior is followed by some reinforcement, other similar behaviors are likely to follow. In operant conditioning we are free to choose the behavior, but whether it sticks or not depends on whether it is followed by some reward (reinforcement). Is our smoking behavior followed by peer approval? Then it is likely to become a habit, especially as the drug nicotine also has very addictive properties.

Social learning theory suggests that we can also learn attitudes by mere imitation of behaviors. People tend to imitate the behavior of models (see, e.g., Larsen, Coleman, Forbes, & Johnson, 1972). When the models are admired or deemed legitimate authorities, we often imitate their attitudes. Children are likely to imitate the political attitudes of parents if the relationship is good (Abramson, Baker, & Caspi, 2002). However, if we seek to dominate the opinions of others, reactance theory may come into play, and children may adopt attitudes opposite to those of their parents. In adolescence children are more likely to look to their peers as role models, and to react in opposition to parental admonitions. We will come back to this more extensively in chapter 7 on conformity.

The different theories of learning, whether classical conditioning, reinforcement, or social learning, all have a role to play in the formation of attitudes. In the case of attitudes, what do we learn? We learn a message about the attitude object. Is the message from peers that smoking is cool and acceptable? Then positive attitudes may develop toward smoking and the behavior will follow. The whole field of persuasion deals with whether and under what conditions messages will be accepted and acted upon (McGuire, 1985; Moser, 1992).

In addition we also learn from association with objects toward which we already have feelings. This is called the transfer effect (Krosnick, Jussim, & Lynn, 1992). Many times we just transfer our feelings from one object to another. We like Al Gore, and therefore like his environmental policies and agree that his work should be honored with the Nobel Peace Prize. The transfer effect is just another example of classical conditioning, in which a stimulus that initiates an emotional response is paired with one that is neutral. Eventually the neutral stimulus elicits the same or similar emotional responses (Olson & Fazio, 2001). Attitudes based on classical or operant conditioning are for the most part not rational. Logic does not play a role, other than helping select from memory the information that supports the attitude. Behaviorally based attitudes, on the other hand, do require reflection; the reasoning of self-perception theory ("I see my behavior, so I must have an attitude") requires some cognitive integration and evaluation.

5. Functional and social influence theories of attitude formation and change
Katz (1960) and Katz and Stotland (1959) proposed a functional theory of attitude formation. Attitudes are formed and expressed because they serve certain functions and respond to specific needs in the individual. The functional theory addresses the why of attitudes: why do we develop these psychological constructs? Functional theory also has implications for attitude change. By understanding the underlying needs addressed by attitudes, we can make our messages more persuasive.

5.1 The Instrumental-utilitarian, ego-defensive, value-expressive, and knowledge functions
According to the instrumental function we develop attitudes because they serve us in some practical way. Workers develop positive attitudes toward labor unions because they believe that the unions will promote their welfare and their rights. Some attitudes have a very practical basis. The utilitarian function suggests that we learn early which attitudes are likely to bring rewards, and which attitudes are followed by punishment. Hence, sometimes we choose to express attitudes because they are socially desirable or "politically correct". As practical creatures we seek to maximize our gains, and develop those attitudes that have assisted us in social adjustment.

The second function is ego defensive. This function explains that many attitudes are developed in response to our personal insecurities and in order to maintain a positive self-image. Ego defenses serve to suppress unpleasant reality. Some think that our personal insecurities motivate all forms of prejudice (see e.g. Katz, 1960; Adams, Wright, & Lohr, 1996). White males may develop negative attitudes toward minorities or women because these groups are perceived to threaten them at some level, and prejudice helps the bigoted person feel better about him or herself by not having to confront personal weak spots. The ego defensive function serves in a similar manner, by keeping away from awareness those unpleasant realities that cause anxiety.

The value-expressive function suggests that our attitudes give expression to our more deeply held values. Peace activists value peace, and therefore develop specific negative attitudes toward war. Values reflect our basic orientation toward the world. We may value justice, and that might determine our specific attitudes toward labor unions working for fairness in the workplace, or toward civil rights organizations seeking to reduce prejudice in society.

Finally, the knowledge function serves to organize our reality and speed our decision-making. If we did not have attitudes toward products, we might spend endless time trying to decide which toothpaste to buy. In contemporary society our knowledge-based consumer attitudes derive largely from advertising. Consumer attitudes speed up the process of choice selection, although the decision still might be mindless. Attitudes are formed because they serve basic functions, as suggested by Katz (1960). Let us examine some of the research using his model as an outline. More contemporary researchers also recognize that attitudes serve basic psychological functions (Pratkanis, Breckler, & Greenwald, 1989).

5.2 Research on the instrumental-utilitarian function
Many attitudes are formed by our desire to obtain rewards and avoid punishments. We learn early that some aspects of our environment are rewarding and useful to us. We are likely to want to approach these objects with positive feelings. The teacher who rewards our efforts with excellent grades is more likely to be the object of our positive attitude, than those teachers who punish us for slovenly behavior. We are more likely to seek out a rewarding professor, use his assistance, and try to cultivate a relationship that may be beneficial in the long run.

Advertising employs similar means in utilizing persons and objects that have positive connotations, like using sexually alluring women to sell cars, or other consumer products. These advertising campaigns seek to associate a positively valued object with what is initially a neutral object. An attractive young lady (the positive object) is associated with a particular car. Car dealers hope that this association will also produce more positive attitudes toward the car, and therefore more sales.

Many other utilitarian attitudes are formed in a similar manner (Petty & Wegener, 1998; Pratkanis & Aronson, 2000). We learn to avoid objects because it helps our survival. For example, we learn to avoid certain foods that contain toxins because these foods often leave a bitter taste, so our attitudes toward these foods also serve a utilitarian function (Profet, 1992). Some would maintain that even our preference for certain environments serves a utilitarian function. Most people have a preference for landscapes that include water, open space, and some uneven ground. These types of landscapes allowed our ancestors to hunt animals, obtain food and shelter, and avoid predators. Perhaps this nearly universal preference served utilitarian functions in our distant past and may now be rooted in genetically based preferences (Orians & Heerwagen, 1999).

5.3 Research on the ego defensive function
Many attitudes are formed in response to personal insecurities and our need to avoid unpleasant facts about life and ourselves. The aim of ego defensive attitudes is to maintain a positive self-image and control our anxieties. Authoritarian attitudes develop in response to fundamental insecurities in the individual, and are expressed in a willingness to submit to and value powerful significant others. Authoritarianism is of two kinds. Adorno, Frenkel-Brunswik, Levinson, and Sanford (1950) developed their theory of rightwing authoritarianism in an attempt to understand the holocaust. They believed that authoritarianism is a syndrome of attitudes and beliefs based largely on the content of rightwing worldviews, as measured by the F (for fascism) scale. More recently Altemeyer (1988) has shown the continuing utility of the concept of rightwing authoritarianism in the development of negative attitudes toward a bewildering set of victims, including minorities. Rokeach (1960) developed his theory of dogmatism, in which closed-mindedness and cognitive rigidity are essential components. Authoritarianism in Rokeach's theory is independent of the content of beliefs, and is manifested in both rightwing and leftwing politics. Dogmatism is also found in religion and other important social ideologies. For Rokeach, authoritarianism is a matter of having either a closed or an open mind, and the rejection of others is based on belief incongruence. Both types of authoritarianism are thought to emerge out of personal insecurities (Larsen, 1969; Schwendiman & Larsen, 1970).

Research established links between authoritarianism and many forms of insecurity (Larsen, 1969). In one study (Schwendiman & Larsen, 1970) birth order was found to be a factor in the authoritarian personality. Authoritarian traits were also predictive of preferences for presidential candidates in the 1968 election (Larsen, 1970) and the 1976 election (Brant, Larsen, & Langenberg, 1978). Authoritarian attitudes also favored mandatory sterilization (Larsen, 1976). Likewise authoritarianism was related to negative white attitudes toward Aborigines in Australia (Larsen, 1978; Larsen, 1981), and found to be a component in general theories of prejudice and social judgment (Larsen, 1970a; Larsen, 1971c).

One interesting thought about the development of ego defensive attitudes is contained in the studies done on terror management (Arndt, Greenberg, & Cook, 2002; Greenberg, Pyszczynski, Solomon, Rosenblatt, Veeder, & Kirkland, 1990; Greenberg, Pyszczynski, Solomon, Simon, & Breus, 1994). These researchers suggest that all people face the existential dilemma of mortality. We all die, a thought you probably do not dwell on at great length. On the one hand, we seem to have a great desire for self-preservation; on the other hand, we are aware of the certainty of death. This existential dilemma causes overwhelming anxiety that is expressed in a variety of attitudes. These attitudes function to protect us from the terror brought on by our unpleasant reality. Many attitudes are formed, these researchers think, to allow us some escape from our mortality. Some people believe that they will live after death, which in turn motivates attitudes toward a variety of religions. Religions, as we know, are supposed to reserve a place for us in the afterlife provided we follow certain prescriptions.

The main idea is that we are searching for something larger than our individual lives. Some feelings of permanence may also come from being part of groups or traditions with a long history. Traditions that are helpful in terror management include those of family, culture, and those found in the major religions. In contributing to these we may feel there is something that survives our individual lives, and makes our existence meaningful. Other people create literature or write books (like this book) in the search for some permanence or symbolic immortality. According to the theory of terror management, we manage our anxiety through a variety of attitudes that all serve the function of pushing out the thoughts of the impending doom. Our attitudes toward religion, culture, and literature, and our creative work, are all attempts to push away the fears associated with mortality. Perhaps drug and alcohol abuse, and reliance on recreational diversions serve similar functions. Sartre once said, “there is no escape” as we either face the existential anxiety associated with our mortality, or neurotic anxiety associated with our feeble attempts at escape. Many attitudes are undoubtedly formed as a result of the grand dilemma of life.

5.4 Research on the value function
Often attitudes are formed because they give expression to our underlying and deeply held values. Many attitudes are expressed in our support for our reference groups. Whether of a political, cultural, or religious nature, these groups matter to us, help us identify our values, and are therefore fundamental to specific attitudes. Parents obviously matter in the development of values, and therefore it should not surprise us that many children support the same political party as their parents (Niemi & Jennings, 1991). In general, conservative groups attract those who are committed to free enterprise, whereas liberal groups are more motivated by values of equality (Hunter, 1991). The pioneering project that demonstrated the changing role of reference groups in attitude formation was the historic Bennington College study of student attitudes (Newcomb, 1958). The students' parents were generally conservative in political beliefs and values, but the college was more left leaning. The question was which reference group's values would prevail in developing the students' lasting political attitudes.

As it turned out, it was the college experience that was the more influential in forming lasting attitudes. The students' initial conservative views changed over the course of their stay in the college environment. A follow-up study showed that these liberal attitudes held for the long run: even 25 years later the majority continued to hold liberal views. Obviously parents were still a reference group, but as could be expected, peers and the college environment had a powerful influence in the formation of more liberal attitudes. Perhaps this knowledge is the basis for the creation of many religious universities where students will not be confronted with ideas different from those of their parents.

5.5 Research on the knowledge function
As already mentioned our attitudes guide our behavior and thereby make our decisions more efficient. On the whole we tend to remember information that is consistent with our attitudes (Eagly & Chaiken, 1998). This has very broad implications for information processing. Our attitudes promote the selective use of memory and perception, and help us sort out the information which is consistent with our attitudes. We tend to think more highly of information that supports our attitudes. In a sense therefore, for many significant attitudes, our knowledge is highly selective and reflects mainly information that will not contradict our cherished views. We maintain positive self-images by remembering only those events that support this image (Greenwald, 1980). For example, we selectively interpret the behavior of minority groups to support our preexisting prejudices (Hamilton & Trolier, 1986). Many of our attitudes are formed in response to our need to cognitively organize the world in accordance with our worldviews and values.

6. The measurement of attitudes
Much of the preceding would make no sense unless we had ways of measuring attitudes formed in a variety of ways and serving many functions. It would also be impossible to understand attitude change, except in some behavioral sense, unless we could use instruments to assess any change over time. Although some attempts have been made to develop multidimensional scales, unidimensional scales are still the primary vehicles through which attitudes are studied. Each of the four methods described below was invented to address specific measurement problems.

One important issue in attitude measurement is unidimensionality. Does the attitude scale measure a single dimension and include statements that cover the range from very positive to very negative toward the attitude object? In other words, out of the attitude universe of all possible statements about an attitude object, which items are related to one another and fall along such a single dimension? Generally item analysis, correlating each item with the total test score, is used to find those items that correlate highest and therefore contribute most to the attitude measured. Other methods can also be applied to determine unidimensionality, including assessments of overall reliability using alpha coefficients and factor analysis to examine the underlying structure of the scale items.
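
As an illustration, here is a minimal sketch of such an item analysis, assuming the responses are held in a respondents-by-items array of numeric item scores; the function and variable names are illustrative, not drawn from any particular statistics package. It correlates each item with the total of the remaining items, a common refinement that avoids counting the item in its own total.

```python
import numpy as np

def item_total_correlations(responses):
    """Correlate each item with the summed score of the remaining items."""
    responses = np.asarray(responses, dtype=float)  # respondents x items
    n_items = responses.shape[1]
    correlations = []
    for i in range(n_items):
        rest_total = responses[:, [j for j in range(n_items) if j != i]].sum(axis=1)
        correlations.append(np.corrcoef(responses[:, i], rest_total)[0, 1])
    return correlations

# Items with low item-total correlations would be candidates for removal.
```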

Reliability is another essential issue in scale construction. This concept addresses the issue of consistency. Will the results obtained by the scale be the same a month from now as in the original administration (the test-retest method)? Another form of reliability is internal split-half reliability, where we correlate the sum of the odd-numbered items with the sum of the even-numbered items of our survey. If reliability is high we would expect a high correlation between the two halves of the scale. Split-half reliability employs the Spearman-Brown prophecy formula to compensate for using only half of the items in the scale, since test reliability is related to the length of the test. In more recent years we have employed an estimate based on the overall intercorrelations of the items, the alpha coefficient.
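
The following is a minimal sketch of these two internal consistency estimates, again assuming a respondents-by-items array of numeric scores (names are illustrative):

```python
import numpy as np

def split_half_reliability(responses):
    """Odd-even split-half correlation, stepped up with the Spearman-Brown formula."""
    responses = np.asarray(responses, dtype=float)
    odd_total = responses[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
    even_total = responses[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...
    r_half = np.corrcoef(odd_total, even_total)[0, 1]
    return (2 * r_half) / (1 + r_half)           # Spearman-Brown prophecy formula

def alpha_coefficient(responses):
    """Coefficient alpha based on item variances and total-score variance."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```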

Validity is a concept that refers to whether the scale measures what it purports to measure. If we are measuring attitudes toward nuclear weapons, is that what we really are measuring, and not some other peripheral object? Validity can be assessed through construct relationships, asking whether the scale correlates in predictable ways with already established measures. It is also possible to use the scale in known-groups procedures. Can the scale discriminate the attitudes of two or more groups that are known a priori to have different attitudes? Are the mean differences significant and in the predicted direction?
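
One common way to carry out such a known-groups check is an independent-samples t test on the total scores of the two groups; the sketch below assumes two arrays of total scale scores from groups expected a priori to differ, and the names are illustrative.

```python
import numpy as np
from scipy import stats

def known_groups_check(scores_group_a, scores_group_b):
    """Compare mean scale scores of two groups expected a priori to differ."""
    t, p = stats.ttest_ind(scores_group_a, scores_group_b)
    return {
        "mean_a": float(np.mean(scores_group_a)),
        "mean_b": float(np.mean(scores_group_b)),
        "t": float(t),
        "p": float(p),
    }

# A valid scale should show a significant difference in the predicted direction.
```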

Reproducibility is related to unidimensionality. It concerns the ability to reproduce responses on the scale knowing a respondent's overall attitude score. If a person agrees with, say, a moderately negative item, he should also agree with all the items that are less negative. The reproducibility coefficient is therefore also a measure of the unidimensionality of the scale.

6.1 The first start: the Bogardus scale
Bogardus (1925) can be credited with the first attempt to objectively determine attitudes by means of his social distance scale. In this scale he would ask the following: "According to my first feeling-reaction, I would willingly admit members of each race (as a class, and not the best I have known, nor the worst members), to one or more of the classifications that I have circled."

This would then be followed with a listing of a variety of national and ethnic groups along the vertical axis, and the following descriptions along the horizontal: To close kinship by marriage (1); to my club as personal chums (2); to my street as neighbors (3); to employment in my occupation (4); to citizenship in my country (5); as visitors to my country (6); and would exclude from my country (7).

Essentially Bogardus sought to measure prejudice by examining the relative social distance the individual felt toward various groups. As can be observed, it is a unidimensional scale of social distance, and it is therefore useful for obtaining some overall idea of stereotypical prejudice in various populations. On the other hand we have no evidence of the scale's reliability, nor does it assess the content of people's attitudes. The social distance scale is useful in ordering groups of people. Social distance can be found for ethnic minorities in terms of their acceptability to the majority, and the acceptability of the majority to the minority may also be determined by including the majority group among the national groups rated.
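
A minimal sketch of such an ordering is given below, assuming each respondent circles one of the seven categories (1 = closest acceptance, 7 = exclusion) for every target group; the data structure and names are illustrative.

```python
import numpy as np

def mean_social_distance(ratings_by_group):
    """ratings_by_group maps a group name to a list of respondents' 1-7 ratings."""
    return {group: float(np.mean(ratings)) for group, ratings in ratings_by_group.items()}

# Example: mean_social_distance({"Group A": [1, 2, 1], "Group B": [5, 6, 4]})
# orders the groups by the average distance respondents keep from them.
```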

6.2 Thurstone scaling
Thurstone and Chave (1929) responded to some of these measurement challenges by developing a scale of "equal appearing intervals". This method requires first the development of a large number of statements representing different points along the unidimensional scale. Some items are formulated as extremely positive, others moderately positive, some moderately negative, and some extremely negative. From this initial item pool Thurstone constructed the attitude universe by developing a scale of items with 11 points ranging from extremely positive to extremely negative toward the attitude object. A large pool of perhaps 200 statements was edited in order to remove ambiguity (Edwards & Kenney, 1946; Edwards, 1957). A panel of about 200 participants would then go through a so-called judgment procedure. They read each individual item and placed it on the 11-point continuum according to its direction and intensity. From these judgments the experimenter determined where each item belonged on the continuum. First he calculated the median of responses for each item; the median is the point that divides the total number of judgments in half. Each item was then placed, according to its scale (median) value, at one of the equidistant points along the continuum. Some statements were judged at point 1 on the scale, others at 2, and so on. Items that did not fall at or close to one of the points on the scale were eliminated. This typically resulted in 80-odd items, so that each point on the scale was represented by 7 or 8 items.

The remaining statements were subjected to a q-value analysis (see e.g. Blalock, 2006: 72-78). The q value is the 75th percentile minus the 25th percentile of the judgments, and is therefore a measure of the spread of the middle 50 percent of the judgments. Only the middle of the range of judgments is used, as the extremes are considered careless assessments. For example, for an item having a scale value of 6, those who placed the item in categories 1 or 2, or 10 or 11, were either unable to do the judging task or were careless judges. The larger the q value, the less agreement among the judges on where to place the statement. The q value is therefore a measure of the ambiguity of the item: the less ambiguous the item, the better the agreement.
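
The two quantities just described, the scale value (median) and the q value (interquartile range of the judgments), are easy to compute; the sketch below assumes a mapping from each statement to the list of judges' placements on the 11-point continuum, with illustrative names.

```python
import numpy as np

def thurstone_item_statistics(judgments):
    """judgments maps each statement to a list of judges' 1-11 placements."""
    stats_by_item = {}
    for statement, placements in judgments.items():
        placements = np.asarray(placements, dtype=float)
        scale_value = np.median(placements)                              # the item's scale value
        q_value = np.percentile(placements, 75) - np.percentile(placements, 25)
        stats_by_item[statement] = {"scale_value": scale_value, "q": q_value}
    return stats_by_item

# Items with large q values (little agreement among judges) are discarded as
# ambiguous; the rest are assigned to scale points according to their medians.
```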

During the next step, the items within each of the 11 groups are ordered according to the size of the q value, and two alternate forms of the scale are assembled from the items with the lowest q values. To assess the reliability of the scale, we correlate the alternate forms. For validity we can use construct validity, correlating our scale with established scales of known validity: are the correlations significant and in the predicted direction? Criterion groups can also be used to see whether the mean differences between groups known to have different attitudes are significant and in the predicted direction. If we are developing a scale on attitudes toward, for example, homosexuality, we might administer the scale to a gay rights group and to a conservative religious group. If the scale is valid, the gay rights group should be found to have significantly more positive attitudes than the conservative group. Commonly, each form of the scale would have 22 statements, two for each point of the scale.

The scale is then ready for use. The respondents indicate agreement with those items that correspond to their attitude, and the attitude score is the mean (or median) of the scale values of all the items with which they agree. Although the Thurstone scale provides us with a unidimensional scale, and may have satisfactory reliability and validity, it is also a very time consuming method. Would it be possible to develop a scaling method that has comparable reliability and validity, but is less cumbersome?

6.3 The Likert scale
The Likert (1932) method responds to this concern, and Likert scales have been found to correlate highly with Thurstone scales, suggesting they measure the same domains (Oppenheim, 1966). At the same time the Likert method is much less laborious to develop. Recall that in the Thurstone procedure we asked judges to rate each item according to its place on the 11-point continuum. In the Likert method we ask people to base their responses on their own attitudes. For Thurstone we asked for objective judgments as to where the item belonged, whereas for the Likert method we ask for agreement or disagreement with the item presented.

As with Thurstone, we start with a large number of statements that reflect the attitude universe of interest. These statements are then edited according to Edwards' (1957) a priori criteria to remove ambiguity. These criteria demand that statements should be simple rather than complex, should be short (rarely exceeding 20 words), should refer to a single object rather than several, and so forth. After editing, the statements are placed in a survey in random order. Since about half are written as negative toward the object and the other half as positive, it is important to maintain random order to avoid response biases. The response categories are typically five: agree strongly (5), agree (4), uncertain (3), disagree (2), and disagree strongly (1). The weights are then summed across the item pool, but only after the weights for the negatively keyed items are reversed, so that all the items are scored in the same direction and the overall score is representative of the item pool.
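
A minimal sketch of this scoring step is shown below, assuming 5-point items coded 1-5 and a list of column indices marking the negatively worded items; the names are illustrative.

```python
import numpy as np

def likert_total_scores(responses, negatively_keyed, n_categories=5):
    """Return one total attitude score per respondent after reverse keying."""
    scored = np.asarray(responses, dtype=float).copy()   # respondents x items, coded 1..5
    for i in negatively_keyed:
        scored[:, i] = (n_categories + 1) - scored[:, i]  # 5 becomes 1, 4 becomes 2, ...
    return scored.sum(axis=1)
```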

A further effort to eliminate items that are ambiguous or do not contribute to the attitude is carried out by means of item analysis (part-whole correlations) or alpha coefficients. The resulting scale may have 20 to 30 items, approximately half of which are positive and half negative. The scale is then administered to a sample, and split-half and/or alpha coefficients are calculated to ascertain scale reliability. Validity is assessed with either construct coefficients or by using known groups to predict mean differences.

The advantage of both the Thurstone and the Likert methods over Bogardus is that both tell us something about the content of people's attitudes. The advantage of the Likert method over Thurstone is that it is much easier to develop. Neither method, however, addresses the problem of reproducibility. The same overall score can be obtained in several ways, and so we do not have a direct way to assess unidimensionality. This was the contribution of Guttman and Suchman (1947).

6.4 Guttman and Mokken scaling
The Guttman scale was developed to address the problem of reproducibility and unidimensionality. Does the scale you have developed represent an ordinal set of items that fall along a single dimension? Do these items form a cumulative scale, so that if we know the respondent's overall score we also know, on a perfect scale, all the items to which he would agree? Given that scales are not perfect, Guttman developed a coefficient of reproducibility to determine whether the scale meets a minimal criterion, usually a coefficient of .90. If the Guttman procedure is applied to a Thurstone-type scale, we will know from the respondent's scale score exactly which items the respondent has agreed with and which items he or she has disagreed with. The coefficient of reproducibility is an estimate of how close an imperfect scale comes to perfect reproducibility, and is found with the following formula: R = 1 - (number of errors / number of responses), where the number of errors is the number of deviations from perfect reproducibility.
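
The sketch below illustrates this formula for dichotomous (0/1) items; conventions for counting errors vary, and this version counts deviations from the ideal cumulative pattern predicted by each respondent's total score (names are illustrative).

```python
import numpy as np

def coefficient_of_reproducibility(responses):
    """R = 1 - errors / total responses for a respondents-by-items 0/1 array."""
    responses = np.asarray(responses, dtype=int)
    order = np.argsort(-responses.mean(axis=0))       # easiest (most endorsed) items first
    ordered = responses[:, order]
    n_respondents, n_items = ordered.shape
    errors = 0
    for row in ordered:
        total = int(row.sum())
        ideal = np.array([1] * total + [0] * (n_items - total))  # perfect cumulative pattern
        errors += int(np.sum(row != ideal))
    return 1 - errors / (n_respondents * n_items)
```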

The Mokken Scale Procedure (MSP) computes a measure of scalability (Loevinger's H) for each single item and for a set of items. In general, an item is considered part of a cumulative scale if it reaches or surpasses a value of .30. The analysis can be applied to dichotomous items, like Thurstone's agree-disagree format (Mokken, 1991), or to polytomous items, like the five-point Likert scale (Sijtsma & Molenaar, 1996), and is essentially a probabilistic version of Guttman scale analysis (Dunn-Rankin, Knezek, Wallace, & Zhang, 2004). As a result of MSP the scale items are ranked according to their 'difficulty' (the average percentage of agreement with the item). The lower the average agreement, the more 'difficult' the item, and the more of the attitude is needed to agree with it.
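
As a rough illustration, Loevinger's H for dichotomous items can be sketched as the complement of observed Guttman errors relative to the errors expected if items were independent; this is a simplified, hand-rolled version with illustrative names, and dedicated software (such as the R mokken package or MSP itself) handles the full polytomous case and significance tests.

```python
import numpy as np

def loevinger_h(responses):
    """Scale-level H for a respondents-by-items 0/1 array (dichotomous items only)."""
    x = np.asarray(responses, dtype=int)
    n, k = x.shape
    p = x.mean(axis=0)                                # item popularities (proportion agreeing)
    observed_errors, expected_errors = 0.0, 0.0
    for i in range(k):
        for j in range(i + 1, k):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            # Guttman error: endorsing the harder item but not the easier one
            observed_errors += np.sum((x[:, hard] == 1) & (x[:, easy] == 0))
            expected_errors += n * p[hard] * (1 - p[easy])
    return 1 - observed_errors / expected_errors
```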

7. Some contemporary examples of measures and attitudes
Attitude scales have been developed in order to study a variety of social topics. For example: attributed power (Larsen & Minton, 1971); integration (Larsen, 1974); women's liberation (Larsen, Cary, Chaplin, Deane, Green, Hyde, & Zuleger, 1976); attitudes toward homosexuality (Larsen, Reed, & Hoffman, 1980); toward rape (Larsen, 1988); toward AIDS victims (Larsen, 1990); and toward illegal immigration (Ommundsen & Larsen, 1997; Ommundsen & Larsen, 1999; Ommundsen, Hak, Mørch, Larsen, & Van der Veer, 2002; Van der Veer, Ommundsen, Larsen, Van Le, Krumov, Pernice, & Romans, 2004; Van der Veer, Ommundsen, Larsen, Krumov, & Van Le, 2007; Ommundsen, Van der Veer, Larsen, Krumov, & Van Le, 2007). Scales offer an opportunity to establish the reliability, the validity, and the content of attitudes. These are the major advantages of scales over single-item surveys. Single-item surveys are furthermore often confounded by the wording of a statement: slight changes in wording can create widely discrepant results and confound the evaluation and significance of the attitude. Where possible, therefore, the researcher should use the Likert method for developing a scale, and check its unidimensionality by applying, for example, the Mokken analysis to the results.

8. Explicit and implicit attitudes
Attitudes can be present either explicitly or implicitly. Explicit attitudes are those we know exist within ourselves, of which we are conscious, and about which we can report. Explicit attitudes produce rapid responses to the attitude object. We could ask a question like “what do you think about women’s liberation”, and most women would have an explicit attitude toward that topic.

Some attitudes are implicit; we are hardly aware of them (Fazio & Olson, 2003; Wilson, Lindsey, & Schooler, 2000). We might endorse very progressive views on tolerance toward other groups in our society while maintaining feelings of discomfort toward these groups. The former are the explicit attitudes that we present to the world, the latter our implicit predispositions (Dovidio, Kawakami, & Gaertner, 2002). We are only now beginning to understand the conceptual difference between explicit and implicit attitudes, but it is important to know that psychologically speaking our attitudes can be split. At one level they are explicit and conscious, but at another, more unconscious level we may hold attitudes that are very different (Greenwald, McGhee, & Schwartz, 1998; Greenwald & Nosek, 2001). We should keep this difference in mind since the research reviewed in this chapter is based on explicit attitudes.

9. Attitudes as predictors of behavior
In the early history of social psychology, scholars were confronted with a study that caused great concern. It showed that attitudes had apparently little to do with behavior. LaPiere (1934) spent two years traveling around the U.S. with a young Chinese couple, visiting hotels, camping grounds and restaurants. Of the 251 establishments they visited, only one denied them service. This surprised LaPiere, as there were strong negative prejudices toward Asians and Chinese in the U.S. Many of these negative views were based on stereotypes of Chinese laborers brought in to build the railroads or to run laundry services in the cities. Most people in fact had not had any personal experience with Chinese people from which to form affect-based attitudes.

After these visits, LaPiere wrote to all 251 establishments and asked for their policies with regard to "Orientals". Of the 128 that replied, 92 percent wrote back to say it was against their policy to serve people from Asia, a result totally opposite to what LaPiere had actually experienced. As only one establishment said it would welcome Asian guests, LaPiere's study suggested that while negative stereotypes were strong, evidently they did not predict behavior. This study is frequently cited to indicate the lack of correspondence between behavior and attitudes. Other studies in the following decades came up with similar discrepancies, and led some to believe that there were no stable underlying attitudes which determined verbal reactions or behavior (Wicker, 1969).

Over the last decades several meta-analyses have been conducted concerning the relationship between attitudes and behavior (see Glasman & Albarracin, 2006 for an overview). Eckes and Six (1994) examined the influence of measurement correspondence, the time interval between attitude and behavior measures, the number of behavior alternatives, and the behavioral domain. They investigated the results of 501 studies, published in 59 journals between 1920 and 1990. The highest mean correlation was between behavior and behavioral intention (r=.54), and the lowest was between attitude and behavior (r=.49). They also identified several moderators of the relationship between attitude and behavior. The number of behavior alternatives (with only two alternatives the correlation is obviously higher than when more alternatives are available) and the way of measuring behavior (with self-report the correlation is much higher than with objective measurement) are examples of such moderators. The behavioral domain also matters a great deal. The correlation between attitude and (objectively measured) behavior is high in the domain of political participation (r=.68) and low in the domain of altruism (r=.20). However, these results still leave much open about what might cause discrepancies between attitude and behavior.

These attitude-behavior inconsistency results came at a time when researchers also found that personality traits failed to predict behavior. Many asked whether there was a total disconnect between what people said and what they did, and whether attitudes really determined anything at all.

To assess this question it is important to understand what really took place in the LaPiere study. LaPiere traveled through the country with a well-dressed and attractive Chinese couple. The couple did not fit the stereotype held in the white prejudicial mind, so when faced with them most establishments could not react stereotypically. In responding to the request for service, the immediate situation overpowered any stereotypes guiding their thinking. In fact, LaPiere did not study affect-based attitudes, but rather stereotypes that only elicit behavior in combination with social support. Behavior is not determined by attitudes alone, and attitudes alone hence cannot fully predict behavior.

10. Other influences that compete with attitudes and cause attitude behavior inconsistency
Human beings are complex, and our behavior, our attitudes, and the relationship between behavior and attitude are the result of many factors. Social psychologists have counted up to 40 different factors that may influence the relationship between attitudes and behavior (Triandis, 1982; Kraus, 1991). A major determinant of inconsistency between the two is social desirability. We often hide our views from others for fear that they will not be acceptable. Our fear of rejection, or of experiencing other forms of punishment, causes us to moderate our responses. We do not always tell truth to power, because power may not like to hear what we have to say, and the consequences can be painful. We may not tell others of our alcohol or drug use, because of the shame associated with these behaviors, so researchers have to use alternative ways to get to the truth (Roese & Jamieson, 1991).

10.1 Attitudes may compete with other determinants of behavior
Any behavior is a consequence of many competing factors, including what we saw as situational pressures in the LaPiere study. As we face decisions in any given situation, we must weigh both our explicit attitudes and the situation confronting us. For example, religious attitudes are poor predictors of church attendance. What are the competing factors that keep religious people from attending religious services? Perhaps they are religious, but their family or friends are not, and pressure them not to attend. Maybe they have to work when religious services are held. For any behavior, we can think of similar reasons for the lack of attitude-behavior consistency. This holds mainly in the short term; when we examine religious behaviors over time, attitudes predict behaviors quite well. Therefore we have to examine long-term effects, and average behaviors rather than individual acts, to determine attitude-behavior consistency (Fishbein & Ajzen, 1974; Kahle & Berman, 1979).

10.2 Attitudes specific to the behavior
Many of the early studies tried to establish relationships between general attitudes and very specific behaviors. For example, in LaPiere's study the request for service involved a very specific decision regarding a well-dressed Chinese couple that did not fit the prejudicial stereotype. The question measuring "attitudes" in the follow-up survey was a very general question referring to "Orientals". Indeed, where studied, general attitudes do not predict specific behaviors (Ajzen & Fishbein, 1977; Ajzen, 1982). However, where the measured attitude is directly relevant to the situation, attitudes do predict behavior. For example, general attitudes toward the environment do not predict recycling behavior, but attitudes toward recycling do (Oskamp, 1991). To establish the true relationship of attitudes to behavior we must measure attitudes that are specific to the behavior being studied. In one study women were asked about their attitudes toward birth control (Davidson & Jaccard, 1979). The survey included very general questions, like what they thought about birth control in general, but also specific questions, such as what they thought about using birth control pills. The researchers waited two years before again contacting the women. The results showed that the general questions did not relate to behavior. This result most likely occurred because the general attitude question measured only stereotypic responses to which the individual had little emotional commitment. On the other hand, the specific questions about birth control pills did strongly predict their subsequent use. The lesson learned: we must measure attitudes toward specific behaviors to obtain good behavior-attitude consistency.

Broader social attitude studies are also useful as they provide information on widespread beliefs serving as the social context of behavior (Fraser & Gaskell, 1990). Broad social attitudes provide a framework that identifies the content of beliefs and feelings, without which we cannot ask the specific questions or determine the need for attitude change. Attitude scales that broadly define attitudes are also important for the development of theories in social psychology. They describe how variables correlate, and in what direction. These attitude and behavioral relationships can help us understand the stereotypic norms of society that control behaviors in ways that are not obvious. We suspect that voting behavior in the US and the Western world is often based simply on feelings of liking, in turn produced by stereotypical advertising by political parties. As we can see, broad or general attitudes can be of great significance, with consequences for both the individual and society. However, broad attitude measurement must show fidelity to the object being measured and demonstrate validity, at least with respect to the construct. General attitudes predict general behaviors. There must be a match between the attitude measured and the predicted behavior.

So, regardless of whether the attitude measured is considered broad or specific, attitudes predict best when both the attitude scale and the behavior are at the same level of specificity. Scales that are highly specific do a better job at predicting highly specific behavior; those that are general or broad do a better job at predicting broad behaviors (Ajzen, 1987). Remember, in the survey on attitudes toward birth control only those questions that asked specifically about attitudes toward the use of birth control pills (not birth control in general) predicted the subsequent use of pills (Davidson & Jaccard, 1979). In the LaPiere study, if the respondents had been asked, "will you serve a well-dressed Chinese couple that is fluent in English", perhaps the results would have been very different.

10.3 Other sources for behavior-attitude inconsistency
Not all attitude components are consistent. It happens at times that we have feelings of dislike and yet think positively about the target person or issue. In several studies, students rated their attitudes toward participating in psychological experiments. Some felt positive, but did not think it would help them in any way; others felt positive and thought it might help their grades or their other academic goals. Those who had consistent attitudes and were positive in both feelings and thought were more likely to participate in the experiments (Chaiken & Baldwin, 1981).

Some attitudes we learn second hand from our educational system or other cultural institutions. Remember the inconsistency in the LaPiere study! This might well have occurred because the stereotypes then prominent in American society were not based on actual encounters with Asian people, but learned second hand through biased widespread beliefs in society. It should therefore be no surprise that attitudes based on real-life encounters are more salient and powerful predictors of a person's behavior. The effect of personal experience has been demonstrated in several experiments. Regan & Fazio (1977) compared student attitudes toward a university housing shortage. One group consisted of those who were made personally uncomfortable as a consequence of the crisis by having to stay in emergency or temporary housing. Another group consisted of those who had only read or otherwise heard about the crisis. Students who had actually experienced the crisis first hand were more likely to engage in relevant behaviors such as signing petitions, when compared to those whose attitudes were second hand. These results have been confirmed in other studies (Fazio & Zanna, 1978; Davidson, Yantis, Norwood, & Montana, 1985).

10.4 Accessible attitudes
Sometimes we are asked to respond immediately to a situation, and if our attitude is accessible, we can make rapid responses. Recently the first author was approached to sign a petition to put a proposal for universal health care in the state of Oregon on the next election ballot. This is an issue toward which he is very sympathetic, and it took him little time to agree and sign the petition. Some salient attitudes produce very rapid and spontaneous responses; they are very accessible in our minds. Other issues are of less concern. He had few opinions on the makes or models of cars to buy. Only after buying a car did he develop an attitude toward the purchased car; prior to his purchase his attitudes were not readily accessible. A study on consumer behavior demonstrated this effect (Fazio, Powell, & Williams, 1989; Fazio, 2000). The participants rated various consumer products, and accessibility was determined by the time it took to respond to a particular product. In this study, only attitudes that came quickly to mind were related to actual behavior.

10.5 Automatic attitudes
Some attitudes function more or less automatically (remember the discussion on automatic thinking in chapter 4). Sometimes a word or image may activate an attitude and make it accessible. In that situation we do not take the time to evaluate the pros and cons of the proposed behavior; we simply act. Support for the presence of automatic attitudes is found in several studies (Bargh, Chen, & Burrows, 1996; Dijksterhuis & Van Knippenberg, 1998). In a sense these behaviors are so automatic that they bypass our conscious attitudes.

10.6 How do attitudes predict behavior?
As we can see from the previous discussion, attitudes compete with many influences in determining behavior. Many of us do not act purely on our attitudes, but are influenced by what we think is appropriate or normative behavior. Ajzen & Fishbein (1980) proposed a theory of reasoned action. It assumes that people consciously choose to behave in certain ways depending on both their attitudes and their understanding of the norms regarding appropriate behavior, what the researchers called subjective norms. Attitudes together with relevant subjective norms produce behavioral intentions, which in turn predict behavior. In a study on breast-feeding, attitudes together with subjective norms (e.g. what the mother-in-law thought of breast-feeding) best predicted the actual behavior (Manstead, Profitt, & Smart, 1983).
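As a purely hypothetical illustration of this logic (the weights and scale values below are invented, not taken from the studies cited), a behavioral intention can be sketched as a weighted combination of the attitude and the subjective norm:

```python
# Toy sketch of the reasoned-action idea: intention = weighted attitude + weighted subjective norm.
# Weights and scores are hypothetical, on an illustrative -3 (very negative) to +3 (very positive) scale.

def behavioral_intention(attitude, subjective_norm, w_attitude=0.6, w_norm=0.4):
    return w_attitude * attitude + w_norm * subjective_norm

# A mother feels favorable toward breast-feeding (+2) but perceives a mildly
# negative norm from her mother-in-law (-1): a weak positive intention results.
print(behavioral_intention(attitude=2, subjective_norm=-1))  # 0.8
```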

Later Ajzen (1985, 1996) proposed a theory of planned behavior. In addition to attitudes and subjective norms, Ajzen proposed the variable of perceived behavioral control. Did the participant believe they could perform the behavior? If not, the attitude and norms would have little effect. Several studies have found support for this expanded theory in a variety of behaviors including dieting (Ajzen & Madden, 1986; Sheeran & Taylor, 1999).

10.7 Some conclusions on behavior-attitude consistency
The aforementioned research supports several conclusions. If we are dealing with specific behaviors, then attitudes toward these behaviors, subjective norms, and perceived behavioral control may increase our ability to predict the behavior. Examples of predictable behaviors include the use of seat belts in cars, and the use of condoms when having sex (Albarracin, Johnson, Fishbein, & Muellerleile, 2001; Armitage & Conner, 2001). Prompting people's attitudes may also increase consistency (Zanna, Olson, & Fazio, 1981), and anything that increases self-awareness of attitudes may also contribute to the predictability of attitudes (Gibbons, 1978; Diener & Wallbom, 1976).

11. Why do attitudes follow behavior?
We know that sales people change customer attitudes by the foot-in-the-door technique. If people agree to perform behaviors that are not too demanding, they are more likely to consent to the larger requests that follow. In the Freedman & Fraser (1966) study, the researchers initially asked residents for a small favor: placing a three-inch sign about traffic safety in their windows. When these participants were approached three weeks later and asked to place a crudely made and ugly sign on their front lawns, 76 percent agreed, as compared to 17 percent of a group that had not been previously approached. What happened? Apparently, behaving in a small way favoring traffic safety changed their attitudes in more significant ways. So attitudes do follow behavior!

Other studies showed similar patterns. People willing to wear a small pin to support cancer research were compared to another group not asked to wear the pin. The group that agreed to wear the pin was later more likely to contribute money to cancer research. Voters who said yes when asked if they intended to vote were 41 percent more likely to actually vote than a control group not asked the question (Greenwald, Carnot, Beach, & Young, 1987). These studies show that responding to a small request, behaving in small and apparently insignificant ways, causes broader changes in attitudes. After the initial non-demanding behavior the individual responds to larger requests. The individual would not have agreed to the demanding request without the prior behavioral commitment.

The roles people play affect their attitudes. Individuals raised to supervisory status change their attitudes substantially as a consequence. Research shows that these former workers become more sympathetic to management positions in their new roles. Called upon to perform a new role, their attitudes changed to be consistent with the new expectations (Lieberman, 1956). When people act in their roles, attitudes follow. We seem to believe our behavior. Military people quickly adopt military attitudes. Although they are the ones who suffer most in wartime, they typically hold the most pro-war attitudes; how else can they justify the risks that they and their comrades take? Attitudes are formed as a result of the roles we play in society. Whether we are students or teachers, we develop attitudes consistent with our roles. Eventually the individual becomes incapable of distinguishing between his role and his personal behaviors as they become one and the same.

In a similar way, when our roles or social situations compel us to say something, we eventually come to believe what we say. Most of us are aware of common attitudes, social taboos, and norms, and we adjust our speech accordingly. We try to speak in ways that please the listener (Tetlock, 1981), and tend to adjust our communications toward what we believe is the listener’s position (Manis, Cornell, & Moore, 1974; Tetlock, 1984). Eventually, saying something becomes believing, and our attitudes become consistent with our talk. We form our language toward our listener’s perceived position and come subsequently to believe the new message. Inconsistency between talk and attitudes would create too much dissonance for most people.

We can observe appalling consequences in wartime. Aided by official propaganda, soldiers often develop callous and inhuman attitudes toward their supposed enemy. Normal people justify immoral acts by devaluing the supposed enemy, and by increasing social distance. Those who commit genocide are often normal decent human beings in civilian life, but come out of war theaters with cynical attitudes toward human life. During slavery, common people accepted the morality of other people being held in involuntary bondage. During the American war on Vietnam, soldiers described the Vietnamese as “gooks” thereby dehumanizing the “enemy”, and justifying their behavior.

This inconsistency-reduction does not always last. Veterans in the United States have since the Vietnam war dealt with issues of delayed stress syndrome. One theory is that soldiers participated in horrible events that were inconsistent with more deeply held values. The inconsistency was suppressed for many years, but typically at great psychological cost to the individual. For some at least, the evil acts produced more cynical attitudes, and their conscience came back to haunt them many years after the behavior.

That attitude follows behavior can also be observed in political movements and their manipulations of populations. In Nazi Germany we saw the people participating in a variety of behaviors supporting the regime. Mass rallies with hypnotic martial music, parades using flags and other national symbols, the German salute of the raised arm, all of these behaviors were powerful conditioning devices. The seductive behavior changed German attitudes to the point that only a few opposed the regime, and even fewer spoke out against the Nazis. Probably all societies have similar conditioning rituals, and politicians use these to win support for policies and political goals. That is certainly true in the Western world. For example, in the U.S. school children are often required to say a pledge of allegiance to the state, sing the national anthem, and salute the flag at all school events. Other countries like the Netherlands and Norway may use different and less strong conditioning to obtain compliance with minimal social objectives. These are all attempts to use public conformity to inculcate broader attitudes toward "patriotism".

Although many say, “you cannot legislate morals”, in fact the evidence shows the opposite. We can encourage normative behavior, and often attitude change follows. If we, for example, examine attitude changes in the southern United States toward Blacks we see huge changes as a result of legislative and other legitimate action enforcing laws on racial equality (Larsen, 1971). Tolerance seems to follow laws that enforce tolerance and equal treatment. We also have evidence that when we act positively toward someone it increases liking of that person. Further, if we do a favor for someone it increases liking for the person we have benefited (Blanchards & Cook, 1976).

12. Theories of why attitudes follow behavior
In the previous discussion we have alluded to why attitudes follow behavior. Let us now discuss the major theories developed in social psychology to explain behavior-attitude consistency. These include cognitive dissonance theory, which suggests that consistency derives from the psychological discomfort of dissonance; self-perception theory, which states that we look to our behavior to understand our attitudes; self-presentation theory, proposing that attitudes reflect image management and our desire to appear consistent to others; and expectancy-value theory, which indicates that attitudes are formed in a process of weighing the pros and cons of our predispositions.

Theories of cognitive consistency
What explanations can we offer for why, over time, our outward behavior gives rise to deeply felt convictions? How is it that people try to make their attitudes consistent with their behaviors? As will be seen, the following theories are essentially theories of rationalization, as the individual tries to understand his attitudes by the experiences that follow from situations and the environment.

Balance theory
Heider (1946) was the first to develop a psychological balance theory. He contended that people seek to maintain a balance between their beliefs, "sentiments", and other people. Heider posited that balance existed in triads consisting of the person (P), another person (O), and some object (X). For each of the three relations in the triad it is possible to envision a positive or negative relationship. The two people may like each other and be friends, but each may like the object or not. If John likes Peter, but does not like Peter's political views, something has to give. John can, for example, change his opinion of Peter and like him less; then the relationship is in balance, since John's negative view of Peter corresponds to his negative view of Peter's political opinions. John can also reevaluate his own political opinions, and come to realize that Peter is right in holding his. Now we are, according to Heider, in balance again, as the positive attitude toward Peter corresponds to the new positive attitude toward Peter's political opinions. Some researchers have supported balance theory, finding that people are more favorable toward, and remember better, balanced relationships than unbalanced ones (Hummert, Crockett, & Kemper, 1990; Insko, 1984).
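Heider's triads are commonly formalized with signed relations: the triad counts as balanced when the product of the three signs is positive. The small sketch below is our illustration of that rule, not Heider's own notation, applied to the John and Peter example:

```python
# Sign-product rule for Heider's P-O-X triad: balanced if the product of the
# three relations is positive (+1 = likes/positive, -1 = dislikes/negative).

def is_balanced(p_to_o, p_to_x, o_to_x):
    return p_to_o * p_to_x * o_to_x > 0

# John (P) likes Peter (O), dislikes Peter's political views (X), which Peter holds.
print(is_balanced(+1, -1, +1))  # False: the triad is imbalanced
# John lowers his opinion of Peter, restoring balance.
print(is_balanced(-1, -1, +1))  # True
# Alternatively, John comes to accept Peter's views.
print(is_balanced(+1, +1, +1))  # True
```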

Cognitive dissonance theory
Heider's theory was seen by many as too limiting in evaluating the complexity of behavior, since it dealt only with triads. Festinger (1957) followed with his theory of cognitive dissonance, which dealt with cognitive balance within one person. In a way similar to Heider, Festinger argued that people do not like imbalance in thought or relationships, and will behave in ways that restore balance. He contended that people in dissonance experience unpleasant feelings that in turn motivate the change of either beliefs or behavior to remove the dissonance. The unpleasant feelings motivate us to change something in ourselves or in the environment. Although the definition is vague, Festinger maintained that dissonance occurs when a person experiences the "opposite" of a given belief or cognition. Put another way, we feel unpleasant tension when two beliefs or thoughts are not psychologically consistent. They somehow do not fit or are incompatible.

You like smoking and feel positive toward this social habit, but you have learned you might die early if you continue. What to do? You could stop smoking, and then your behavior would be consonant with your beliefs. Smoking causes addiction though, so some may find quitting difficult. Dissonance theory would suggest that when we feel the inconsistency we also feel pressure to change our beliefs and/or feelings. In a British survey (Eiser, Sutton, & Wober, 1979) smokers were in denial. They resolved the dissonance between desire and health by disagreeing with the assertion that smoking is dangerous. The dangers of smoking had been exaggerated, the addicted seemed to say. Some smokers would argue that they knew people who smoked every day of their adult lives and yet lived to see a hundred years. Smokers rationalized their behavior and tried to find good reasons to continue the habit. Rationalizations reduce dissonance if they are sincerely believed. Do you think many smokers truly believe in their dissonance reduction efforts?

12.1 Reducing dissonance in our lives
We often reduce dissonance after making important decisions by selectively finding reasons to support our choice. In similar ways we find reasons to downgrade the alternative not chosen. We constantly try to assure ourselves that we have displayed wisdom in our choices. Any decision that is important creates some dissonance (Brehm, 1956), and we therefore usually change some cognition. For example, you bought a new car, but had doubts about the wisdom of the purchase. To remove the dissonance, you looked for information that permitted you to rationalize your decision. Some advertising, for example, showed that the car is highly ranked in consumer satisfaction. In addition the car has many surprising and delightful features that please you, so now you are a happy customer and your dissonance is removed.

Many experiments show this tendency for customers to rationalize their decisions (Knox & Inkster, 1968). The aforementioned study showed that people's confidence in the horse they had bet on at the racetrack increased after the purchase of the betting ticket. On the way to the betting counter gamblers were unsure, feeling the dissonance of the impending decision: would the horse run as they hoped? After the purchase, however, the bettors expressed great confidence in their choice. Making difficult decisions triggers uncertainty, produces dissonance and activates the rationalization process. The same applies to behavior before and after voting (Regan & Kilduff, 1988). Recent research shows that the rationalization process may even begin before the decision is taken, to minimize any resulting dissonance (Wilson, Wheatley, Kurtz, Dunn, & Gilbert, 2004). Dissonance reduction does not necessarily occur at a conscious level. As soon as we have subconsciously made a decision, we selectively evaluate and seek out supporting information in order to justify our decision (Brownstein, 2003; Simon, Krawczyk, & Holyoak, 2004).

In many cases, we make decisions that involve substantial effort, but are nevertheless disappointing in their outcomes. We can reduce the dissonance by justifying to ourselves that the effort was after all worthwhile. For example, students participating in an experiment were led to believe that it would be exciting and deal with sexual topics. Some had to go through a severe screening test, whereas the control group only listened to a few suggestive words about sexual behavior. What followed was a boring discussion on the sex life of invertebrates. The experimental group (who had to endure the screening to participate) experienced a large amount of dissonance between expectations and the actual event. What did the students do? Those in the dissonance group spent a great deal of time convincing themselves that the session was not so boring after all, that much useful information was imparted (Aronson & Mills, 1959). Useless bogus therapy brought about a similar dissonance reduction effort (Cooper, 1980).

Reevaluation pressures are especially strong when we choose between alternatives that seem more or less equally attractive (Brehm, 1956). The tendency to favor the chosen alternative increases when people are at the point of implementing the decision. This pattern indicates that the favorable reevaluation is a part of the decision making process (Harmon-Jones & Harmon-Jones, 2002). Some of the most dramatic reevaluations have occurred in cases where prophecy fails (Festinger, Riecken, & Schachter, 1956). A doomsday group had predicted the end of the world on a specific day. When the day arrived without the expected destruction, the group was initially chagrined. Soon, however, they responded to the dissonance with renewed energy as they busily engaged in recruiting new supporters. Did the attempt to convert others help reduce their own dissonance? Common sense would tell us that the group would just pack it in, and accept that their beliefs were absurd. Instead they performed as dissonance theory would predict and reduced dissonance by new explanations and active recruitment of new believers.

12.2 Counter attitudinal acts and dissonance
Many people have had the unpleasant experience of acting contrary to their attitudes. Perhaps the boss asked you to work on holy days, against your beliefs or your plans for the weekend. When a person engages in such attitude-discrepant behavior, it is predictably followed by dissonance. Most people resolve these unpleasant feelings by readjusting the attitude. Perhaps it was not so bad to work on the proscribed days! After all I was paid to do it, and my standing with the company improved, they may reason. Similar rationalizations can be found for practically any behavior that runs contrary to a person's original attitudes. Those who do not believe in premarital sex, but engage in the behavior, justify it by saying they are really in love, or it feels good so how could it be wrong? Any dissonance produced can be reduced by an overwhelming new array of beliefs that support the behavior.

If called upon to perform a counter-attitudinal act, dissonance depends on the level of the incentive for the behavior. There has to be some justification or minimal incentive to engage in the behavior. The true believer who works on holy days because he wants the extra pay might feel dissonance. However, if the boss pays triple wages, gives alternative days off, and promotes the individual as a consequence, dissonance theory would predict little tension. We minimize dissonance when we have many good reasons for discrepant behavior. Dissonance was created in a study on whether communist speakers should be permitted on U.S. university campuses. Those who were paid little to participate in the study changed their attitudes more compared to those paid more (Linder, Cooper, & Jones, 1967). For real attitude change there has to be some incentive, but not so much that the individual feels fully compensated and justified by the incentive.

Dissonance also depends on whether we feel we have a choice. When we behave in ways contrary to our beliefs, but feel we have little choice, the resulting behavior should cause little tension. If employment is necessary for survival, then working on days contrary to beliefs would probably be justified by most people. Along with feelings of choice, the commitment to the decision also matters. If we feel committed to working on holy days despite our moral objections, and we feel our behavior will not be altered, then less dissonance is experienced (Jonas, Schulz-Hardt, Dieter, & Thelen, 2001).

Some dissonant behaviors do not require much effort. Driving faster than the law allows may be contrary to a person's better sense, but it only requires a heavy foot and is not likely to produce much dissonance. However, if you are stopped by the police and have to pay a heavy fine, that is likely to produce dissonance. When people can foresee the possible negative consequences of their decisions, dissonance is increased. If you also had to work very hard, and expend a great deal of effort to pay the fine, you are likely to experience even more dissonance. If a decision is felt to be important, we feel more personal responsibility for the outcome. Therefore, if the outcome is negative, we feel more dissonance. We feel bound to reevaluate our attitudes when outcomes are negative and we feel responsible (Scher & Cooper, 1989).

Other findings suggest that dissonance increases when the behavior is relevant to our self-conception. If the behavior undermines our feelings of competence or morality, dissonance follows and attitudes change (Steele, 1988). This is especially true for people with high self-esteem, as for these people a threat to competence will be felt as more dissonant, requiring attitude change (Stone, 2003).

The conclusion is that dissonance and therefore attitude change results from a number of factors. These include limited incentives for the behavior (one cannot excuse it by the many rewards that come from performing it). We also have to feel we have some choice in the matter, and an unchanging commitment to the inconsistent behavior. We also experience more dissonance when we can foresee the consequences, and put great effort into the self-relevant behavior. Under these conditions, dissonance is likely to occur and attitude change follows.

12.3 Attitude change following compliance
When people are seduced or compelled to behave in ways that are inconsistent with their beliefs and values, dissonance follows. One could repent and give up the inconsistent behavior. However, the easier and therefore more likely path is to change or readjust attitudes. Festinger & Carlsmith (1959) demonstrated this effect when they asked participants to engage in what can only be called experimental drudgery in a psychological experiment. Participants in the control condition were sent directly for debriefing, and of course reported being bored by the experiment. In the experimental conditions the participants were told that the experiment was about how people's performance was influenced by their prior expectations. As part of the deception, these true experimental participants were informed that they were in the "control" condition, and they were asked to tell the next participants (confederates of the experimenter) about the experiment. Since the experimenter's confederate was absent, would they (the true participants) tell the next subject how exciting the experiment was? Some of the participants were offered a dollar to do so, other subjects were offered 20 dollars. This experiment was carried out in the days when a dollar would pay for admission to a movie, but one dollar was not enough to justify the lie, the experimenter reasoned. Twenty dollars was, however, a significant amount, and the individual would therefore feel less dissonance in lying, as he/she would feel compensated and justified in telling the next person that the experiment was great. Later, when asked about their experience, those in the one-dollar condition rated the experience more favorably than those in the $20 condition. Being seduced to lie for one dollar brought about more attitude change, whereas those in the control and $20 conditions rated the experiment negatively.

It follows that if we want to induce change we have to offer some incentive to arouse interest, but not so much that the person will feel justified in the compelled behavior. This has implications for childrearing, as was shown in the experiment by Aronson & Carlsmith (1963). The experimenters showed nursery school children a set of five toys and asked how much they liked each. The children were then told that the experimenter had to leave the room, but that they were free to play with all the toys except the second most favored toy. In the mild threat condition, the child was told that the experimenter would be "annoyed"; in the severe threat condition, that he would be "very angry" and that all the toys would be taken away.

When the experimenter left the room, none of the children played with the forbidden toy. However, dissonance theory predicted that only the children in the mild threat condition would feel tension between their desire to play and their behavior. The researchers therefore reasoned that these children would resolve the feelings of dissonance by downplaying the value of the toy. The children in the severe threat condition should feel little dissonance, since the threat justified in the child's mind why they should not play with the toy. As expected from dissonance theory, children in the severe threat condition continued to evaluate the toy favorably; they had not changed their minds. On the other hand, those in the mild threat condition changed their attitudes to less favorable, or at least neutral. The compliance was enduring, as even six weeks later the children from the mild threat condition were still derogating the toy (Freedman, 1965). Thus it would appear that mild threats are the way to go if a parent wants to encourage attitude change. Would that also work for adults?

12.4 Culture and dissonance
When working with the Aboriginals of Australia in a variety of capacities, many years ago, we observed that they were not particularly bothered about many things that bothered European-descended people. If they showed up late for a meeting, that would not require an apology. Something just changed on the road to the circus, and we should understand that. Discomfort over cognitively inconsistent thoughts may be a culturally bound effect, a result of societies that value consistency. Support for this idea has been found in several studies. In one study (Heine & Lehman, 1997) Japanese students displayed less dissonance when compared to Canadian participants.

Sakai (1981), however, found dissonance effects for his Japanese students if they were led to believe that other students were observing their behavior. We know from other studies that Asian people are more aware of others, and are more oriented toward the community and the reactions of other people. Hence, if you can prime such awareness in Japanese participants, it should produce larger dissonance effects. This priming procedure produced dissonance effects in the study by Kitayama, Snibbe, Markus, and Suzuki (2004). For cultures that are community oriented, dissonance effects may mainly have to do with social approval or disapproval, whereas for western societies dissonance occurs more in connection with the ability to make good choices.

All cultures find some behaviors dissonant, but under very different circumstances. Those living in Asia express attitudes depending on the situation they find themselves in, because social harmony is an important value. Those in the west are also developing more tolerance for inconsistency, and often hold ambivalent attitudes. Some may favor the death penalty for certain reasons, but abhor it for others. Consistency may therefore be more in the nature of a culturally expressed value than a cognitive way of organizing our world (Priester & Petty, 2001).

13. Self-perception theory
Suppose someone asked you "do you like to go to the movies?" You think for a moment and then say "well I go twice a week, so I must like movies!" This is an example of Bem's (1972) self-perception theory. We do not really consciously know our attitudes; we look at our behavior and infer our attitudes from how we act and the situations in which our behavior occurs. Self-perception theory makes the same predictions as dissonance theory, but for very different reasons. For example, in the experiment where the participant was paid a dollar or 20 dollars to tell someone that a very boring experiment was enjoyable, the individual in the one-dollar situation is in dissonance when he lies. However, self-perception theory can also explain the results. The participant was paid only a dollar to lie, and that is not enough to justify a lie; therefore the participants think they must really have enjoyed the experiment. In other words, the participants examined their behavior to determine their attitudes, as self-perception theory predicted.

Self-perception theory is a social perception theory. People come to an understanding of their own attitudes and those of others by means of observation. Bem would argue that people often have no attitudes to report. People who live socially isolated lives, who are uninvolved in the happenings of society, and that is most of the people in the world, have no attitudes based on direct experiences. They observe when people stand up for the national anthem and infer patriotic attitudes. We see people say the pledge of allegiance in the US and we infer their attitudes toward the state. Those who say the pledge infer the same patriotic attitudes in themselves, because saying is believing!

We watch other people act in a variety of circumstances, and infer their attitudes from their behaviors. We see people go to church and infer religious attitudes, we read of people in the drug scene and infer indifference to laws and social convention, we see people laugh and think they must be happy. Likewise we look at ourselves, because the behaviors we engage in are self-revealing and tell us about our attitudes. We hear ourselves say something, and from that understand our attitudes. In one study, people who were anxious about an upcoming test were led to believe that the anxiety came from white noise delivered by their headphones. Those who were given this information were subsequently calmer and more confident (Savitsky, Medvec, Charlton, & Gilovich, 1998).

James (1890) drew similar conclusions a century earlier when he said that we infer our emotions from how our bodies function. We take an examination important to our future, feel our heart pump and our hands get wet, and conclude from these physical symptoms our psychological state of anxiety. Often our emotions fall into line after our physical expressions. It is difficult to smile and still feel grumpy; you could try it yourself. If you put a pen in your mouth, holding it with your smiling muscles, will you not find the cartoons in the paper funnier? (see Strack, Martin, & Stepper, 1988). Now try for the opposite effect by holding the pen with pursed lips; how does that influence your feelings about the cartoons?

Other researchers have been able to elicit similar emotions from facial expressions (Laird, 1974, 1984; Duclos, Laird, Schneider, Sexter, Stern, & Van Lighten, 1989). From our observations of others' facial expressions we develop empathy, especially if we synchronize our movements, voice, and bodily postures with others (Hatfield, Cacioppo, & Rapson, 1992). Feeling the same as others (empathy) may explain our attraction to happy people and our desire to avoid those who are depressed.

14. Evaluating the dissonance theory and the self-perception theory
People adopt or change attitudes for entirely different reasons in dissonance and self-perception theory. Festinger would say that attitudes are very enduring predispositions to act in a certain way. When people behave in ways that are inconsistent, it produces unpleasant feelings that cause the individual to reevaluate his attitude. Bem, on the other hand, thinks of attitudes as somewhat casual in nature. We often do not know our likes or dislikes, but we infer these as we reflect on our behavior. We know that many people do not really have affect-based attitudes, but possess stereotypes passed on by socialization. Consequently, when people have few experiences with the attitude object, or when people are not involved in the issue and it has little importance, they may infer their attitudes from how they behave (Albarracin & Wyer, 2000). This is as Bem would predict. However, when attitudes reflect more enduring issues that involve the person at a basic level, dissonance theory would better explain attitude change.

The process of attitude development and change is also different in the two theories. Dissonance theory hypothesizes that inconsistency between behavior and prior attitudes produces an unpleasant feeling in the individual, which is resolved by attitude change or adjustment. The unpleasant tension motivates change in our attitudes. Self-perception theory on the other hand would suggest that the process is rational, not emotional, as we examine our attitudes based on our behavior and the situation. Studies generally support the idea of arousal and therefore dissonance theory, when people act contrary to their true beliefs (Elkin & Leippe, 1986; Elliot & Devine, 1994; Harmon-Jones, 2000; Norton, Monin, Cooper, & Hogg, 2003).

How can we then reconcile the findings of the two theories? The studies on dissonance theory do indeed create emotional arousal as predicted. However, the dissonance results are also based on self-report, as explained by self-perception theory. Are both theories right? Today we see a consensus among social psychologists that dissonance theory applies when the inconsistent behavior is clear to the individual, and is important to him. Self-perception theory applies more to attitudes that for lack of experience are vague to the individual, and of little importance. Human behavior is complex, but sometimes people are simple, and have few experiences upon which to base their attitudes. Under these conditions they naturally look to others and their own behavior for explanations. Research has shown that a surprising number of people have weak or ambiguous attitudes, suggesting the importance of self-perception theory. Furthermore, self-perception theory has shown that important social attitudes can be changed through self-awareness, including the desire to contribute to the common welfare (Freedman & Fraser, 1966), and an awareness of how strongly we feel about topics (Tice, 1993). Therefore, self-perception theory deals with more than the trivial, and also engages important topics. How do we change behaviors like smoking? It may prove more complex than just creating dissonant feelings. Self-perception theory would recommend self-awareness. At other times dissonance theory is important. Poignant experiences have left the individual with enduring predispositions to act. Those who experience war first hand develop very enduring attitudes toward violence as a means of solving conflict. We can conclude that dissonance and self-perception theories are both needed to explain attitudes.

It is important to remember that self-deception always plays a role in perception. You may think that only others behave in irrational ways, while that is not true of your own thinking. It is therefore likely that you believe that dissonance rationalizations are just something that others do, since your own attitudes are rational (Pronin, Gilovich, & Ross, 2004). However, we all rationalize to some degree about important social issues like war or global warming. We need to counteract such dissonance-driven rationalization, and in the process also become more self-aware.

15. Self-presentation theory
One basic fact of human existence is our interrelationships with others. As a consequence of this interdependence, we care what other people think, and we work hard on developing an acceptable social identity. Self-presentation theory asserts that making a good impression is the primary basis for attitude development. We are motivated by our desire for acceptance by our peers and reference groups. By displaying consistent attitudes we seek to become more secure in acceptable social identities (Leary & Kowalski, 1990). In the pursuit of social acceptability we will say what it takes to win others over to our side, often with hypocrisy and insincerity.

Self-presentation theory suggests that many of our behaviors are shallow, and are often expressed as a means of managing the impression we make. It follows that our attitude expressions are motivated by a desire to avoid offense. We do not like to be the bearers of bad news, since that too may form a bad impression (Bond & Anderson, 1987).

According to self-presentation theory we never truly know others, because people are chameleons who change their attitudes to fit the environment. Likewise people change their attitude-based behaviors to fit the expectations of others. In this theory, that is how attitude formation and change come about. We are social antennas attuned to acceptable attitudes, and our role is one of articulating these as our social environment changes. Some attitudes may be appropriate at home, others on the job, still others in cultural or political institutions. Attitudes therefore serve primarily an adjustment function, helping us adjust to the demands of the social environment. In the process we often express attitudes in which we do not believe (Snyder, 1987; Zanna & Olson, 1982; Snyder & DeBono, 1989; Snyder & Copeland, 1989).

As we have noted elsewhere, the desire for approval is also a personality trait, and people vary in how important it is to them to make desired impressions (Larsen, Martin, Ettinger, & Nelson, 1976). Those who care less what others may think are more internally motivated, and are therefore more likely to express sincere attitudes that they truly feel and believe (McCann & Hancock, 1983). People low in the need for approval spend less time self-monitoring or worrying about what others think, as they do what they think is right. Are most people anxious to fit into society, or do they express sincere self-relevant attitudes? How about you: do you use impression management so you can get good grades or make a good impression on parents and significant others?

Part of a good social image, at least in western societies, is to "appear" consistent. For many, consistency reflects a person's integrity. In expressing our attitudes, we try to have people see us as our ideal self. However, this too may be based on our desire to be acceptable to those that matter in our lives. In self-presentation theory, we are consistent in our behavior not because we feel dissonance, but because consistency is a cultural value.

16. Expectancy-value theory
We have already discussed the functional value of attitudes. Self-presentation theory promotes the idea that attitudes are held because they help us in social adjustment. Expectancy-value theory reflects more the direct benefits of attitudes in bringing us rewards and helping us to avoid punishment. It is a theory that logically follows from the capitalist system where the profit motive predominates. Attitudes are formed as a result of a rational process in which the individual examines all the costs and benefits associated with a given attitude position. Which attitude alternative brings the highest rewards? (Edwards, 1954).

In more formal terms, Edwards suggested that people seek to maximize outcomes by assessing the value of a particular outcome and the likelihood that the attitude will produce that outcome. Suppose you are very anxious to achieve a job promotion, and the increase in income is highly valued. Do you believe that expressing agreement with your boss on particular issues will make it more likely that he will support your promotion? Then expectancy theory suggests you adopt his attitudes with that expectancy in mind. On the other hand, maybe you will lose the esteem of your fellow workers if you brown-nose the boss. We humans look at the balance of incentives where goals may be in conflict, and adopt the course that is likely to maximize gains. Expectancy theory describes people as rational and calculating decision makers. We can see many examples from history where people manipulate others in order to obtain high office and personal gain.
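As a hedged illustration of this expectancy-value calculus (the probabilities and values below are invented for the promotion example, not drawn from Edwards' work), each attitude option can be scored as the sum of outcome values weighted by the subjective probability that the option produces them:

```python
# Subjective expected value of each attitude option: sum of probability * value.
# Numbers are hypothetical, chosen only to illustrate the weighing of incentives.

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

agree_with_boss = [(0.6, 10),   # promotion and higher income
                   (0.7, -4)]   # losing co-workers' esteem
speak_your_mind = [(0.2, 10),
                   (0.1, -4)]

print(expected_value(agree_with_boss))  # 3.2
print(expected_value(speak_your_mind))  # 1.6
```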

Summary
Attitude theory is a central topic in social psychology, and a field that has been studied since the beginning of our discipline's history. The structure, or components, of attitudes are defined in this chapter. Each attitude has an affective, a belief, and a behavioral component. Attitudes are oriented toward specific objects that can be other people, ideas, or things. We expect consistency between the components. Generally an attitude is manifested by some positive or negative feeling toward the object, a supporting set of beliefs, and is expressed by certain behaviors. The chapter also discussed when that does not occur, when attitude-behavior inconsistency is apparent.

There are those who think, based on identical twin studies, that attitudes have a genetic basis. However, most research has examined a social basis for attitude formation. One or another component may dominate in attitude development. For some people attitudes are based on what they know. Affect, however, plays the dominant role for many attitudes, also affecting important cognitive decisions such as which candidate to support in elections. Some attitudes express a person's underlying value system, and are based on reason and memory. Other attitudes are formed from direct experience. People can also develop attitudes toward a variety of objects without any personal experience, as we see in prejudicial behavior.

Theories of attitude formation rest on the classical viewpoints of learning theory, including conditioning, reinforcement, and social learning. Functional theory has made major contributions by suggesting that attitudes are formed in response to the basic needs of the individual. Functional theory responds to the why of attitude development, but also suggests the how of attitude change. We must appeal to the functions if we hope to change attitudes in a more desirable direction. Research is described for the several functions. In the utilitarian function, attitudes serve to maximize rewards and minimize punishment. The ego-defensive function suggests that many attitudes are developed in order to maintain a positive self-image and control our anxieties. The research on terror management shows that this function may have very broad implications, not only for philosophy, but also for creativity as we search for some permanence in our temporary existence. Attitudes may also give expression to underlying values that we have obtained in the socialization process from parents and reference groups. For example, children often manifest political and religious attitudes similar to those of their parents. Attitude functions are based on selective memory and perception in organizing our world. We tend to value information supporting our viewpoints more highly, and it is also more accessible in memory.

We cannot evaluate the literature unless we understand something about how attitudes are measured. The various attitude scales have been developed to address several measurement problems. These include unidimensionality: does the scale measure a single dimension? Other measurement issues include reliability, the consistency of results over time or within the scale. Validity asks the question: does the scale measure what it purports to measure? Researchers have developed several techniques to address these issues. Reproducibility refers to whether we can reproduce a person's individual responses on a scale given that we know his total score; it is another way of asking whether the statements fall along a single dimension. Both Guttman and Mokken developed methods to assess this issue.
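To make the idea of reproducibility concrete, the following is a minimal sketch, in Python, of how Guttman's coefficient of reproducibility can be estimated. The data, the item ordering, and the simple error-counting rule (counting deviations from the ideal response pattern implied by each person's total score) are illustrative assumptions for this example, not a specific published procedure.

def coefficient_of_reproducibility(responses):
    # responses: list of per-person 0/1 answers, with items ordered from easiest
    # to hardest to endorse. On a perfect Guttman scale a person with total
    # score k endorses exactly the k easiest items.
    errors = 0
    total = 0
    for person in responses:
        k = sum(person)                              # total score
        ideal = [1] * k + [0] * (len(person) - k)    # ideal Guttman pattern
        errors += sum(a != b for a, b in zip(person, ideal))
        total += len(person)
    return 1 - errors / total

# Hypothetical data: four respondents, five attitude statements
data = [
    [1, 1, 1, 0, 0],   # fits the ideal pattern exactly
    [1, 1, 0, 1, 0],   # deviates from the ideal pattern (two "errors")
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
]
print(coefficient_of_reproducibility(data))   # 0.90

In this illustration a coefficient near 1.0 means that the total score alone nearly reproduces each person's individual answers, which is what unidimensionality implies; values of about 0.90 or higher are conventionally taken as acceptable.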

Bogardus initiated the study of attitudes by means of his social distance scale. It gave researchers a rough estimate of stereotypes toward various social groups. This was followed by Thurstone's method of equal-appearing intervals, which supplied information about the content of attitudes and responded to the measurement problems of reliability and validity. Likert developed a method of equivalent utility that is much easier to construct. Guttman and Mokken addressed the issues of reproducibility and unidimensionality.
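As a brief illustration of why the Likert method is so easy to construct and score, here is a minimal sketch assuming three hypothetical attitude statements; the items, the five-point response format, and the reverse-scoring of negatively worded items are assumptions for the example, not items from any actual published scale.

# Hypothetical Likert items: (statement, reverse_scored)
ITEMS = [
    ("Immigration enriches our culture.", False),
    ("Immigration is a threat to local jobs.", True),    # negatively worded
    ("Immigrants should have a path to citizenship.", False),
]

def likert_score(ratings, items=ITEMS, scale_max=5):
    # ratings: one rating per item, 1 = strongly disagree ... 5 = strongly agree.
    # Negatively worded items are reverse-scored so that a high total always
    # indicates a more favorable attitude toward the attitude object.
    total = 0
    for rating, (_, reverse) in zip(ratings, items):
        total += (scale_max + 1 - rating) if reverse else rating
    return total

print(likert_score([5, 1, 4]))   # 5 + 5 + 4 = 14, a favorable attitude

The summed score can then be examined for reliability and validity in the ways discussed above.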

Contemporary research shows activity on a variety of attitude objects, from attributed power to illegal immigration. These topics can also be addressed by single-item surveys, but the advantage of scales is that they permit assessment of reliability and validity. The results of a survey also depend greatly on the exact wording; even apparently minor changes in the words used can produce dramatic differences in responses. It is important to remember that we are discussing explicit attitudes in this chapter. We can only measure that which is accessible to the conscious mind, but people may hold opposing implicit attitudes of which they have little awareness.

Are attitudes useful predictors of behavior? The LaPiere study caused consternation as social psychologists observed an apparent inconsistency between initial behavior and subsequently expressed attitudes. We should remember that LaPiere probably did not study attitudes, but rather stereotypic responses derived from a prejudicial society. Other causes of attitude-behavior inconsistency are the many different factors that compete for attention. The social desirability of attitudes causes some people to refrain from expressing them in order not to offend those with influence. To evaluate research, we need to take the long view in examining attitude change, and ensure a good fit between measurement and behavior. It does not matter much to predictability whether the attitude measured is specific and narrow, or general and broad. What is required is that measurement and behavior be at the same level of specificity. Broad attitudes are important in understanding the framework for more specific attitudes and the supporting norms. Other sources of attitude-behavior inconsistency derive from having no direct experience with the attitude object, a lack of accessibility that would allow for spontaneous expression, and the presence of automatic attitudes that require little thought and therefore produce no dissonance. Theories suggest prediction is improved if we know a person's attitudes, subjective norms, and perceived behavioral control.

At times we can observe that attitude development follows expressed behavior. Studies on counter-attitudinal acts show that dissonance depends on the level of incentives, our feelings of choice, the effort required, and whether the attitude is self-relevant. Attitudes also follow compliance in several studies.

The self-perception theory of Bem states that we look to our behavior to determine our attitudes. Dissonance and self-perception theories predict similar behaviors, but for very different reasons. Dissonance theory is more useful in understanding attitudes that the individual considers important and self-relevant, whereas self-perception theory better explains weaker or more ambiguous attitudes. In self-presentation theory, the primary purpose of attitude expression is to make a good impression; attitudes therefore serve primarily adjustment functions and express our desire for social acceptance. The chapter concludes with a discussion of expectancy-value theory, which states that attitudes are developed or changed by the desire to obtain rewards and avoid punishment.




Being Human. Chapter 6: The Influences Of Group Membership

Social psychology is about the influence of others on our behavior. There are many influences on our behavior as represented by the varying chapters of this book, but group membership is central to social psychology. What is a group? A group consists of two or more people who interact directly. People in groups are to some degree interdependent because their needs and goals in life cause them to have influence on one another (Cartwright & Zander, 1968; Lewin, 1948). Groups are so central to our lives that we rarely give a thought as to why we join. Clearly groups have many benefits, some related to our very survival, which helps explain why we join. Some researchers would even say group memberships reflect innate needs tied to survival and derived from our evolutionary past (Baumeister & Leary, 1995). Life with others allows for many benefits that include (in our early history) protection from predators of either the animal or human variety. Other benefits may include assistance in child rearing, in hunting and gathering, or in collaborative agriculture that eventually freed human society from ever-present hunger. In fact, in all cultures people are motivated to seek membership in a variety of groups, and often to maintain their affiliations at all costs. There may even be an innate need for social contact: people isolated long enough will often display symptoms of mental disorder or otherwise “lose” their minds (Gardner, Pickett & Brewer, 2000).

1. What are groups?
Researchers have observed that group structure is created almost immediately after a group is formed. For example Merei (1949) noted that after only a few meetings children began to differentiate roles and establish informal rules as to who would sit where in the room and who would play with certain toys. This differentiation of expected behavior is referred to as group structure (Levine & Moreland, 1998). Social norms are the behaviors and rules that are considered standard and appropriate for the group. In one study young teenage girls decided what boys were considered eligible, and one accepted rule among the girls was to not pursue boys who were already attached to someone else (Simon, Eder, & Evans, 1992).

Groups also define the roles of group members; i.e., the division of labor specifying the required behavior of each member. Role specification would define the responsibilities of the head of an organization, and the expected behaviors required of other members of the group. The group also determines the status of each member: what prestige the individual has within the group, and therefore what potential or actual leadership position or authority is vested in each member. Even in groups where there is some formal equality, research indicates that some individuals emerge as more powerful than others. In the jury system, even though initially there is no difference in the selection of members, when deliberation begins some members quickly become more influential and one is voted to become the jury foreman or leader. Generally groups are formed to achieve certain goals, and those who are perceived to be effective toward that end are given high status. This is also called expectation states theory (Berger, Webster, Ridgeway, & Rosenholtz, 1986).

A community-wide organization is not a group. For example, being a member of a university does not constitute group membership, since one does not interact with all members of the student body. Being a member of the military or a church does not imply group membership either, since again there is no opportunity for all members to interact. Likewise, being on an airplane with other passengers does not form a group, since people have few opportunities to interact. That of course could change if the plane underwent some emergency requiring passengers to interact to save their lives. Generally groups consist of anywhere from two or three members to several dozen participants. To be a group the situation must allow for mutual interaction and interdependence.

Groups emerged out of our evolutionary past since they performed many important functions for the individual and society. Groups assist us in forming our identity: who we are and what our values are. This is easy to see among students, who often wear clothes, e.g., t-shirts with some slogan, identifying group membership such as being fans of musical groups. Note, however, that a fan base, like the student body of a university, is not automatically a “group” in our sense, because large numbers of members never interact with one another.

So all groups have in common that the members interact and therefore influence one another. Groups also serve as a form of identification between those who are like-minded and those who are not. Turner, Hogg, Oakes, Reicher, & Wetherell (1987) would say that groups encourage the feeling of “us” versus “them”, those who think differently. People do not join groups to be challenged in their beliefs, or for alternative viewpoints. Generally people join groups to be reinforced in their already existing viewpoints (Levine & Moreland, 1998; George, 1990). Another feature of groups is the role they play in reinforcing social or group norms. These powerful determinants shape our behavior, and groups encourage conformity. If we do not follow the group norms we may be shunned or asked to leave (Marques, Abrams, & Serodio, 2001).

1.1 Groups define our roles
A very important function of groups is specifying the roles played by members. The manager and the worker play distinctly different roles in a work group. Roles specify how individuals occupying certain positions should behave. Role specification, depending on the values of the group, may be a positive factor leading to higher productivity or satisfaction; alternatively, role rigidity may lead to autocratic behavior and stagnation. Roles can be very helpful since they let people know what to expect from each other, thus making behavior more predictable and efficient in many cases. When the group operates with clearly defined roles, performance and satisfaction increase (Bettencourt & Sheldon, 2001).

At times social roles may be counterproductive and lead to antisocial behavior. We see through the experiences of war how some people get lost in their group identity, and under the cover of that identity commit brutal acts (Fiske, Harris & Cuddy, 2004). Zimbardo and his co-workers brought to our attention (Haney, Banks, & Zimbardo, 1973) how easily a role can take over the identity of the individual. In their experiment students were assigned to be either prisoners or guards in a simulated mock prison. The experiment had been designed to last for two weeks, but was stopped after six days because the participants were clearly changing in a negative way as a result of their role-playing. The “guards” became brutal in their treatment, devising ways of humiliating their fellow students. Those playing the role of “prisoners” also changed, becoming more submissive and compliant in the face of the abuse. Clearly, roles can have even stronger effects in the real world, as in the case of real prisons. We need only look at the abuse in Iraq to see a disgusting example of behavior changed when “normal” citizens in the armed services play the role of guards, and when the norms of the US armed forces allow such abuse. The example of prisoner abuse in the US prison camp at Guantanamo Bay, Cuba, also comes to mind. The effect of roles on aggressiveness may also be exacerbated when people with aggressive personality dispositions feel attracted to roles as guards (Carnahan & McFarland, 2007).

1.2 Gender roles
Currently societies all over the world are experiencing many changes pertaining to sex roles. In the past, women in a variety of cultures were expected to take on the role of wife and mother, and to be primarily responsible for the home. With emerging modern societies this gender role specification has largely changed. In socialist societies the change came about for ideological reasons favoring the equality of the sexes, and the need for women's intellectual and cultural contributions to productivity. In capitalist societies the change came about as a consequence of long struggles by feminists and their supporters for equal opportunity and treatment. The First World War, 1914-1918, also contributed to gender role changes. When the men went to fight during World War I, women started working at many of the men's jobs in factories and other locations. When the war ended, women did not accept the re-establishment of the traditional roles. In the 1920s women were granted voting rights in many European countries and in the US. The feminist movements of the 1960s and onward also greatly changed the nature of gender roles.

The changes in role expectations of women caused, as might be expected, much conflict. Some of the conflict came as a result of women taking on increased burdens. In addition to working at jobs outside the home, women were still expected to maintain the traditional role of primary childcare provider and to provide for the general maintenance of the home. Some evidence would suggest that this expectation is still present in our modern world (Brislin, 1993).
One interesting aspect of role changes is that they also changed women's attitudes and personality traits. When women's status improved in society, so did their assertiveness (Twenge, 2001). In other words, gender roles are powerful determinants of our personalities, and of how we generally feel about ourselves and our lives (Eagly & Steffen, 2000).

1.3 Group cohesiveness
Groups vary. Some are very temporary, and membership has only fleeting importance. Student groups are of this type since membership ceases upon graduation. In other cases the ties between group members may be tenacious and enduring, sometimes lasting for life. Of course the family comes to mind. But having common goals, as found in political groups or those based on common religious beliefs, may also create harmonious groups with great endurance. In these groups there are many qualities which bind the members to each other, and which serve to produce mutual liking and respect. The term group cohesiveness is generally used to describe such close-knit groups that have an enduring character and promote mutual liking and respect.

One could say ideally all social groups would have such a character. Unfortunately other factors also play a role. For example in university departments, collegial groups that would benefit greatly from cohesiveness often do not because of professional jealousy or competitiveness. Environments that reward excelling at the expense of others produce conflict. Generally speaking, cohesiveness produces a better group atmosphere, and makes it more likely that members stay together and combine in their efforts to produce better group products, and seek to have new members join (Levine & Moreland, 1998).
While many factors may affect the cohesiveness of a group, the liking relationship is probably most important. When people have strong feelings of friendship for one another, cohesiveness is high (Paxton & Moody, 2003). Liking improves the effectiveness of group performance, as such groups manifest less dysfunctional conflict and interact more harmoniously. Groups, in some very significant ways, determine who we are, and our sense of identification with the group is important in feelings of group cohesiveness. Political and religious groups all help the individual connect with the larger world, and express deeply held attitudes and values (Van Vugt & Hart, 2004).

Some groups are important because they serve these or other instrumental needs. Satisfaction is not always guaranteed. Although in many cases our attraction to the group is based on anticipated positive consequences, at times a group stays cohesive because there are no alternatives apparent. People may stay in a job they despise because the salary is high, or there are no good alternatives. Many students stay in courses they have little enthusiasm for because these courses are required for graduation. However, when group members enjoy the company of each other and accept the goals of the group, satisfaction and morale tend to be high. Such cohesive groups are more likely to enhance productivity if the norms of the group include hard work and dedication (McGrath, 1984).

2. Social influences
In this section we discuss three primary examples of group influence: social facilitation, social loafing, and deindividuation.

2.1 Social facilitation
The initial question addressed by social psychologists was: do people act differently when other people are around than they do when alone? Does the presence of others produce more energy in pursuing our tasks, or is it more likely we become lazy in the presence of others? These and many other questions have been addressed in early as well as very recent research. Triplett (1898) conducted what is generally regarded as the first experiment in social psychology, and the first study of social facilitation. He invited a group of children to his laboratory and asked them to cast and reel in fishing lines as fast as possible over six trials with rest periods between. In three of the trials the child performed alone; in the other three another child was present doing the same task. The children tended to reel in faster when they were in the presence of another child, a phenomenon now called social facilitation. Later experiments confirmed these findings (Gates, 1924) and extended them to animal species (Ross & Ross, 1949); however, this early research also included some contradictions. On more complex tasks the presence of others produced inhibition of performance, as for example in solving arithmetic problems (Dashiell, 1930). These different results suggested two possibilities: sometimes the presence of others helps, and in other cases it hurts performance.

2.1.1 Social facilitation on simple and complex tasks
Karl Marx wrote in Das Kapital that “mere social contact begets … a stimulation of the animal spirits that heightens the efficiency of each individual workman”. In other words, he anticipated that social contact would serve as a releaser of energy. The presence of others energizes people to perform at higher levels if the task is simple. Zajonc and his co-workers (Zajonc, Heingartner, & Herman, 1969) presented a theory that explained in an elegant manner when the presence of others facilitates performance. People do better on simple tasks in the presence of others, but do worse on complex tasks (Schmitt, Gilovich, Goore, & Joseph, 1986; Bond & Titus, 1983). Doing something simple like riding a bicycle leads to performance at higher levels when others, including spectators, are present. We see this heightened performance in the achievements of the Olympics, when world records are set in front of millions of fans present or watching on television.

However, if one is working on a difficult math problem, then the presence of others may be diverting and flustering as a solution is sought. The reason for the lower level of functioning is the psychological fact that we cannot easily attend to two things at the same time and the presence of others may divert our attention.

In addition, people, as social animals, are always concerned about how others evaluate them. People are worried about doing poorly in the presence of others, and this evaluation apprehension causes us to do poorly on complex tasks. Evaluation apprehension has been verified in numerous studies (Geen, 1989; Thomas, Skitka, Christen, & Jurgena, 2002). One important question raised is whether it is the mere presence of others that causes evaluation apprehension. The answer is no: it is the possibility of being evaluated that causes the apprehension (Cottrell, Wack, Sekerak & Rittle, 1968). Cottrell et al. showed convincingly that it is our concern that others may evaluate us, and not just their presence, that produces the social facilitation effect.

In summary, the presence of others may energize us on simple tasks when our individual efforts can be evaluated, since the possibility of evaluation produces alertness; on complex tasks that same possibility produces evaluation apprehension. Depending on the complexity of the task, distraction and attention conflict may also hurt performance. From the perspective of Zajonc et al. (1969), we respond to the presence of others with the most dominant response. In simple tasks the dominant response happens to be the correct response, but on complex tasks the dominant response of the individual is most frequently incorrect. On complex tasks what we have learned in the past is not a guide to a solution that presents novel challenges. Habituated responses do not solve the problems of science or society.

2.1.2 The effect of crowding
In the presence of others people are aroused, as manifested by physiological changes. People breathe faster, have a faster heart rate, perspire more, and have higher levels of blood pressure from the mere presence of others (Geen & Gange, 1983; Moore & Baron, 1983). In crowds the presence of others may intensify the already prevalent mood. People who are mourning feel grief more intensely at a eulogy, and those who are excited at sporting events express their enthusiasm more freely. Negative behaviors such as lynching are also more likely when a crowd is organized and prepped for hostile actions. In crowds friendly people are seen as more friendly, and unfriendly people are disliked even more. Again task completion may be affected: crowding has negative effects on complex tasks, but does not negatively affect simple or routine behaviors (Evans, 1979). Crowding is the subjective feeling of not having enough space. This experience is different from objective measures of population density, i.e., how many people occupy a given space. Crowding is the physical discomfort felt from being cramped and desiring more space, especially when with strangers. With a loved one, on the other hand, a person may desire very little space; most of us are in fact happier with less space in that situation. Yet in a public setting at the beach or in the mountains, even a few other people can produce a feeling of being crowded. Crowding, the subjective experience, is always unpleasant.

The individual experiences sensory overload when crowded (Milgram, 1970; Baum & Paulus, 1987). In addition, people in crowds feel less in control (Baron & Rodin, 1978). For example, crowding reduces control over moving about, maintaining privacy, or otherwise managing the environment. We attribute negative meaning to being crowded. At a sporting event, on the other hand, people are distracted by the action and do not feel the unpleasant consequences of high density. High density on a bus or train offers little such distraction, and people are therefore more likely to feel stress.

Culture has a significant effect on whether a person feels crowded (Evans, Lepore, & Allen, 2000). People from more collectivist cultures prefer closer physical distances in conversation, and are less affected by high physical density than those living in more individualistic cultures such as those of Western Europe or the United States.

2.2 Social loafing: Another consequence from the presence of others
At times the presence of others may not produce increased energy or task completion. This phenomenon is called social loafing. We have all met people who seek a free ride in life, and who do as little as possible to get by. When we become members of groups, membership often allows us anonymity, where the individual identity is merged into that of the group. The individual in the presence of others becomes less noticeable, and therefore less worried about evaluation. Social loafing occurs when the individual believes that individual performance will not be noticed, but rather that the overall group product is evaluated. In a factory, for example, workers may earn their salary based on overall productivity rather than individual performance. In collectivist farming, the individual farmer has less responsibility, but is judged as part of collective performance. Social loafing is therefore the tendency of people to perform worse on simple tasks in the presence of others, because of the anonymity of individual contributions (Williams, Harkins, & Karau, 2003).

Performance in groups is affected by how important the individual perceives his contribution to be to the outcome, and how much the individual values the goal. If the individual's effort is lost in the crowd and cannot be identified, that situation is likely to produce lower levels of performance. Social loafing refers to the relaxation in effort that occurs when the individual cannot be held responsible for his or her production, and his or her work cannot be identified.
Consequently the solution to social loafing is straightforward: make sure that each individual's performance can be identified, and therefore evaluated. Social loafing is greatest among strangers, but seems to disappear when the individual works with people he knows well, or works in a group that is highly valued by the company or by society. Social loafing is reduced when appreciation is offered in the form of higher salaries or other social rewards (Shepperd & Wright, 1989). It is also less likely to occur when the tasks required are complex, interesting, meaningful, and identifiable. Among highly motivated workers there is also sometimes a tendency to compensate for the inadequate performance of others (Williams & Karau, 1991). This is known as social compensation, and it occurs when the individual believes that others are not working adequately and the outcome or product is important.

Sometimes an individual lacks information about the productivity of others. If he is highly motivated, how does he handle this situation? Plaks & Higgins (2000) found that people rely on social stereotypes to assess productivity. Based on the stereotype that females do not perform as well as males on mathematics, the researchers found that males worked harder when paired with a female. When a colleague is unwilling or unable to produce at high levels, motivated workers seek to compensate and work harder.

2.2.1 Cross cultural differences in social loafing
Some studies have found evidence for social loafing in a variety of societies like Thailand, India and China (Karau & Williams, 1993). However, there is also evidence for cultural differences where social loafing is greater in individualistic cultures and occurs less in more collectivist societies (Gabrenya, Wang, & Latane, 1985).

On collective farms the Russian peasant was given small plots of land to produce for his own use and for sale. These plots constituted less than 1 percent of the total agricultural land, but produced 27 percent of the output in the nation. Similar results were found for Hungary, where private plots accounted for 13 percent of the land but approximately one third of total production (Spivak, 1979). In China, when farmers were allowed to sell food grown in excess of state requirements, food production increased by 8 percent each year after 1978 (Church, 1986). Are these improvements related to social facilitation or social loafing? When the individual feels he has no personal investment, and efforts are not individually appreciated, production is likely to decrease. However, workers who grow up in a group-oriented society, where the individual is taught the importance of the welfare of the group, may perform better working in groups.
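A rough back-of-the-envelope calculation, using only the percentages cited above and treating the Soviet figure of "less than 1 percent" as exactly 1 percent (so the result is a conservative lower bound), shows how large the gap in output per unit of land was between private plots and the surrounding collective land.

def output_per_land_ratio(land_share, output_share):
    # Relative output per unit of land: private plots versus the remaining
    # (collective) land, both shares expressed as percentages of national totals.
    private = output_share / land_share
    collective = (100 - output_share) / (100 - land_share)
    return private / collective

print(output_per_land_ratio(1, 27))    # Soviet private plots: roughly 37 times
                                       # the per-unit-of-land output of collective land
print(output_per_land_ratio(13, 33))   # Hungarian private plots: roughly 3 times

Such figures are consistent with the interpretation offered above: when effort is individually owned and rewarded, loafing declines.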

The challenge in collectivist societies is not to give up the goal of a common and harmonious future, but to provide the individual with feelings of ownership of social production, and to develop techniques of rewarding individual performance. This reward system must obviously go beyond the "heroes of labor" awards of the Soviet Union, which were likely instituted in response to social loafing. Real feelings of ownership of social property and management must be encouraged. That is a great challenge, but critical to the future of societies that follow the socialist path.

Capitalist societies encourage individual goals and achievements, which can result in higher productivity levels but make it less likely that the individual worker identifies with group goals. As in all research, any principles developed about social loafing must be verified in cross-cultural research, particularly research that has significant implications for social policy. In some ways the ideals of a collectivist society must become internalized and accepted in a genuine manner, and not be based on threats. If the goal is compelling to the individual, then the team effort will increase. We are not speaking of empty promises of a distant future, but of real gains for society that can be observed and measured. People loaf less when they are challenged, when the work is motivating or appealing (Brickner, Harkins & Ostrom, 1986). When people see their own individual efforts as indispensable, work productivity increases (Kerr, 1983). Therefore it is not the ideology of a society, whether individualistic or collectivist, that matters. What matters are the perceived individual incentives that give the worker a stake in the future development of society. This is vividly demonstrated by the kibbutz system in Israel. This collective socialist farming system actually outproduced Israel's private farms (Williams, 1981; Leon, 1969). Clearly the collective farmers in this socialist system felt that their individual efforts mattered and felt an ownership of management and social property.

2.2.2 Gender differences in social loafing
Women tend to be higher in what is called relational interdependence; i.e., they care more about personal relationships, tend to be more aware of them, and focus their attention on others. Do these traits have an effect on social loafing? As it turns out, Karau & Williams (1993) found evidence for less social loafing in women as compared to men. Other studies also report less loafing in women (Eagly, 1987; Wood, 1987). Women do of course engage in social loafing just as men do, but to a lower degree. Likewise, men in Asian cultures also loaf, just to a lower degree than men in Western cultures.

In summary, we need to know several conditions to determine whether the presence of others facilitates or hinders performance. First, are the individual's efforts evaluated, so that there are personal consequences for the quality and quantity of performance? If the performance is evaluated, then the presence of others leads to higher levels of arousal and energy. But if performance cannot be evaluated, when the individual is just a number and anonymous in a large group, then social loafing is likely. Second, the complexity of the task makes a difference. Social facilitation research shows that people in general do better on a simple task when among others, but worse when performing complex or difficult tasks.

2.2.3 General applications to work situations
For the management of workers doing simple tasks there should be ways to reward individual performance, or at least to evaluate performance individually. In such circumstances evaluation anxiety produces better productivity. Social loafing also has implications for the physical arrangements of the work situation. On simple tasks workers perform better when directly observed by the supervisor, since unobserved and anonymous effort on simple tasks invites social loafing. On the other hand, if the worker is required to perform complex tasks it is important to lower performance anxiety and place workers in situations where they are not observed, in order to reduce anxiety and produce better solutions. In today's offices workers performing complex tasks are often placed in open office layouts. This is done to create openness and make everyone feel that even the highest officers are accessible. Is that always the best working situation for those working on complex tasks? The research cited above would suggest that the physical arrangements of work situations should be tailored to the task performed, simple or complex. When the solution requires complex or novel responses and must be committed to memory, it is best produced without the arousal or distraction of others. Studying with fellow students can help maintain energy and motivation. However, preparing for a test that requires individual thinking and complex solutions is best done in some form of social isolation. Likewise in the work situation, social facilitation produces benefits for simple repetitive tasks, but as the difficulty level rises workers need the luxury of privacy.

2.3 Deindividuation
You probably recognize that people do things in groups they would never do alone. For example, sometimes groups are transformed into vicious mobs bent on destruction and aggression. The football hooligans in Europe come to mind. In more serious cases we can see this effect in the dismal history of lynch mobs in the United States, which murdered thousands of Black people, both enslaved and free, during that dark time of history. Le Bon (1895) believed that groups became mobs through a process of social contagion in which people lost their higher faculties of reason and moderation. In large mobs it is as if people descend to lower levels of civilization where individual rational minds give way to an irrational “group mind”. Something different happens when we become part of a group: the group is both more than and different from a collection of individual minds. Deindividuation refers to the loss of individual identity and self-regulation, and the lessened influence of moral values, that occur in group settings (Diener, 1980; Festinger, Pepitone, & Newcomb, 1952). As individuals we have an interest in our appearance and in how our behavior may be evaluated, whereas in crowds people may behave like barbarians.

Zimbardo (1970) suggested that people in a deindividuated state are less able to observe themselves, are less concerned with social evaluations, are less aware of the self, and are more focused on others. Being in such a state may lower the threshold for behaviors which would otherwise be inhibited in the individual. Deindividuated people may participate in impulsive behaviors including the murder of innocents or the sacking of public property. Zimbardo argues that people in many societies live in mental straitjackets, always having to keep their impulses under control. Mob behavior may be liberating and allow for feelings of spontaneity. Across cultures, nearly all national and cultural groups have events that allow some escape from cognitive control. For example, in Latin America during carnival people let go of their inhibitions. Other nations may have festivals of a similar kind. Sporting events also allow a similar release from our self-censorship. Society has an interest in allowing venues that permit release from self-control, whether through dancing or other cultural events. Such events permit the release of pent-up feelings and frustrations.

A decidedly negative form of deindividuation is what is called suicide baiting. For some of us it is difficult to understand how anyone could encourage a suicidal person to jump from a tall building. Yet that is what frequently happens in the anonymity of large crowds gathered to view what for some is a spectacle. Mann (1981) examined 15 years of newspaper accounts of suicidal jumps and found that nearly 50 percent included suicide baiting, where the suicidal person was encouraged to jump by some anonymous person in the crowd. Usually the baiting was associated with large crowds and darkness, making individual identification less likely.

War is of course the ultimate form of antisocial behavior. The long and dark history of mankind is manifested in our determined efforts to kill one another in aggression and hostility. It is easier to kill in warfare because these conditions produce deindividuation. Soldiers feel excused from the usual prohibitions against barbarity when they cannot be held individually accountable, and when society places value on aggressive behavior. Watson (1973) investigated warfare in 23 non-Western cultures to examine the effect of deindividuation on brutality. If the warriors were deindividuated before battle by wearing masks or painting their faces, the likely outcome was greater brutality, seen in the torture of enemies and fighting to the death. It is instructive that in modern armies uniforms serve a similar function, supported by attempts to stereotype and dehumanize the enemy before battle.

Deindividuation refers to the loosening of the normal restrictions we all feel when aware of personal values and societal constraints. When people are deindividuated they find it easier to perform both impulsive and deviant acts (Lea, Spears, & De Groot, 2001). In war we see many horrible acts committed by so-called “normal” people who would probably consider themselves upright moral persons. The massacre at My Lai comes to mind as just one of thousands of brutal acts committed during that war. It is truly a question of getting lost in the crowd, thereby displacing responsibility for violent acts onto the situation or authorities and escaping personal guilt. Getting lost in the crowd is a useful metaphor.

Mullen (1986) found support for the idea that the larger the mob the more savage the behavior. In a content analysis of newspaper accounts of lynching in the United States he found that the larger the mob the more savage the people were in murdering their victims. The larger the number of people the less the individual responsibility felt by the participant.
Deindividuation also works through increasing conformist behavior, seen in obedience to the norms of the group (Postmes & Spears, 1998). If the norms of the group include the right to take a life when the person is of another race or nationality, then being lost in the crowd is likely to produce obedience to this dominant norm. Contrary norms of a personal nature may also be present. The apparent moral conflict between personal and group norms is not felt by many people, as the power of the group norm in most cases overcomes individual conscience. It is the norm of the group that determines at that particular moment the behavior of the mob, whether positive or negative. For some groups the norms are vicious; in others they are more benign. Behavior obviously differs depending on whether one is a member of a lynch mob or intends to get lost in a crowd at a rock concert.

In other words, deindividuation is enhanced if the group is large, allowing for psychological and physical anonymity. This explains why uniforms are often part of the deindividuation process, as we see historically in the fondness of the Nazis for their uniforms and for uniformity. Why did the Ku Klux Klan wear sheets and hoods when performing their acts of terror against Black or progressive people in the United States? Why did executioners in medieval times wear black and often masks? Even today executions are deindividuated, since the executioner is anonymous. Further, the act of killing is carried out by several participants, diffusing responsibility. Anonymity is preserved and no individual needs to feel responsible.

Deindividuation occurs in the presence of distracting activities. If we yell at the referees at sporting events we do so because the norms permit us to do it, and we are anonymous. Later we may think more of what was said and feel chagrined at our uncouth behavior. In some cases we directly seek to be deindividuated to release ourselves from personal responsibility. Examples are dances and religious worship experiences where the individual gives up rational behavior in favor of closeness with others and overcoming aloneness.

2.3.1 Moving toward self-awareness
If losing ourselves in the crowd makes us more impulsive, then perhaps a greater focus on the self could produce opposite effects. When we look inward, we focus on the self and on our values, and we become more concerned with self-evaluation. Research shows that under these conditions we become more concerned with whether our behavior conforms to our most deeply held values (Duval & Wicklund, 1972). Few people meet such high standards of self-awareness, but there are always inspiring examples of some, like those who go on true humanitarian missions even knowing they may be killed or tortured by the very people they are trying to help. Experiments (Duval & Lalwani, 1999; Beaman, Klentz, Diener, & Svanum, 1979) have shown that people do indeed act more consistently with their innermost values if first made self-conscious by being placed in front of a mirror or an attending audience. For some people such self-consciousness is painful, as they become aware of the discrepancy between their values and behavior. Some conflicted individuals seek to escape self-consciousness through alcoholism or other forms of escapist behavior.

Many people are self-conscious to a painful degree, as demonstrated in what we call the spotlight effect. The spotlight effect occurs when we believe that we are scrutinized, judged, noticed, and remembered by others to a much larger degree than is truly the case. We believe others attend to us, while we ourselves do not attend to others (Epley, Savitsky & Gilovich, 2002; Gilovich, Kruger, & Medvec, 2002).
In conclusion, we have seen that the relationship between self-consciousness and behavior takes two paths. In the case of deindividuation, the individual loses self-awareness in large crowds, and behavior moves in the direction of conformity to the immediate group norms; the resulting behaviors are often impulsive and destructive, as we observe in mob behavior. The second, opposite path occurs when self-awareness and the spotlight effect produce motivation to behave with more propriety and in accordance with personal values and beliefs.

2.3.2 Group versus individual decisions
Are group decisions superior to those of individuals? Groups influence behavior, sometimes for the better, sometimes with disastrous consequences, depending on the norms of the group. Now let us address the issue of whether group decisions are better than solitary decisions. Intuitively we may think that the individual has only his own experience and knowledge of social reality, so group decisions should be better. A group brings to the decision more experience, and an evaluative process that may, given the right circumstances, produce better decisions. What some research tells us is that more heads are better than one if the group relies on those with the expertise (Davis & Harless, 1996). This, however, requires norms that encourage a focus on expertise and group goals rather than on power or status seeking.

Group processes might however interfere with good decisions. Many group members exhibit streaks of stubbornness and an unwillingness to admit error, and therefore once committed to a goal are unwilling to change. Such ignorance of expertise is called process loss, i.e., when groups inhibit good decision making due to extraneous influences such as ego or dogma which are not relevant or useful to the decision being made (Steiner, 1972). Other forms of inhibition of the decision-making process occur as a consequence of communication problems, where people do not listen to each other, effectively tuning out important information. In yet other groups, some individuals are intellectual monopolizers who grab the limelight and dominate all the discussion. In some groups there is little trust and little communication. In these groups the important issues may never be discussed due to insecurity and fear of rejection.

2.3.3 When information is not shared
Sometimes there is insufficient information to provide a basis for good decisions. It is a well-established finding in social psychology that members in groups tend to focus on the information they have in common, and to ignore information that each member may hold separately and individually. Groups have a tendency to discuss only information that is shared by group members, and to exclude from the discussion information that is novel (Stasser & Titus, 1985). Even if members of a group have useful but novel information, chances are that it will not be discussed, or will be brought up so late in the discussion that it has limited utility. In one study (Winquist & Larson, 1998), group discussions were coded for how much time was spent on each segment. The results showed the common knowledge effect: group members spend considerably more time discussing common information and little time on unshared information. This effect undermines the major advantage of group decision making, that of making better decisions by drawing on a broader knowledge base.

The reasons that this effect occurs are relatively clear. When common information is discussed, all have a shared framework that in turn produces greater ease and comfort in the group process. Everyone can participate when common information is discussed, whereas only a few can when the information is novel. It is the rare group member who has sufficient ego strength to bring up novel topics and information. In general, group members who bring up commonly shared information are also valued more positively than those who bring up information that is unique. A wise group would be aware of this fact and, wanting to make the best decisions, would ensure that meetings are long enough so that novel ideas, typically brought up late in the discussion, have a full hearing. The role of comfort in shaping discussion also explains why groups show a confirmation bias. Groups seek out information that will confirm already existing viewpoints, rather than information that might challenge the status quo. Group discussions aim at justifying initial decisions rather than critically examining new information that might challenge previous decisions (Schulz-Hardt, Frey, Luthgens, & Moscovici, 2000).

One way to overcome the common knowledge effect and confirmation bias is to ensure that group discussions build in sufficient time to share novel information, and time to challenge the status quo (Larson, Christensen, Franz, & Abbott, 1998). Another way may be to assign specific topics as the responsibility of individual group members so each participant is responsible for bringing up relevant information. One or several members could be assigned the task to specifically bring new or novel ideas to the group. In relationships couples sometimes assign each other different household tasks. One partner may be responsible for paying bills on time, the other for making the children’s medical or dental appointments. Research has shown that such combined memory is superior and more efficient than the memory of either person alone (Hollingshead, 2001).

3. Groupthink: The outcome of faulty thinking produced in highly cohesive groups
In highly cohesive groups the decision-making outcome is sometimes disastrous. Generally this occurs when there is great stress and groups are under social pressure to achieve consensus. In American foreign policy we see many examples of “groupthink” which have produced terrible consequences for the US and the world (Janis, 1972; 1982). Among the many fiascos that dominate the history of foreign policy in the US, we can mention several well known to the world. The Kennedy administration, in its hostility to the Cuban revolution, sought to overthrow the Cuban government by sponsoring an invasion of about 1,400 counter-revolutionaries trained by the CIA. Despite initial lies in the United Nations, the role of the US soon became clear. The invasion force was decisively defeated and captured or killed after a couple of days of combat. This event constituted a serious embarrassment to the US. History shows that the decision to attack Cuba was the outcome of conformity pressures in the council of the president that led the US to underestimate the popular support of the Cuban revolution, and to demonize its leadership.

At another time in history Hitler and his group of cronies made a similar mistake in attacking the Soviet Union. Perhaps China also made such a mistake in attacking Vietnam. Another disastrous decision was the American war in Vietnam, and in particular the decision by the Johnson administration to send more troops to Vietnam. The outcome of that decision significantly increased the number of lives lost among American soldiers and among the Vietnamese population. Other outcomes of groupthink include the decision by NASA to go ahead with the launch of the shuttle Challenger after being warned by the engineers that the O-ring seals might fail. The seals did fail, the shuttle exploded, and all aboard were killed. Probably you can think of many other examples from history in various European countries. The current foreign policy intervention of the Bush administration continues this pattern of foolish and disastrous decisions through its effort to “spread democracy” by invading sovereign nations. The Neocons responsible for current US policy (and their supporters elsewhere in the world) again seriously underestimated the will of their opponents to resist and inflict damage. As of this writing there is no solution to the bloodshed unleashed.

3.1 What is groupthink: antecedents, symptoms, and decisions
Groupthink refers to the delusional thinking that occurs in highly cohesive groups where the pressure to reach consensus subverts critical thinking. Janis (1982) suggested that groupthink typically occurs in a highly cohesive group that is about to make an important decision for which it is not fully prepared. The group is excessively optimistic; it believes it is moral in its decision making and in full control of all important events, and therefore invulnerable. Within the group there is a strong desire for consensus that is achieved by suppressing dissenting information and discouraging the consideration of alternatives or the evaluation of undesired consequences. The group convinces itself that since it is morally superior there is no need to search for other relevant information. Further, the group has no built-in procedure for evaluating alternatives to the course suggested or demanded at the start by the strong leader who chairs the group and strictly directs the deliberations.

Discussion within the group is limited and contributes to the unanimity with regard to the decision made. The group furthermore puts pressure on individual group members to conform. Dissenting group members are too fearful of rejection to object, and may even convince themselves that their doubts are not worth entertaining. There are no contingency plans made if things go wrong, because group members are convinced they are right. Moreover, portraying the opponent in demonic terms assists this process of delusion as stereotypes always fall short of reality. The stereotyping of historical enemies in European history led to some of the greatest policy failures in wartime. Groupthink results in shallowness in decision making due to the lack of information and the narrow or non-existent consideration of alternatives for action.

Groupthink as a concept has intuitive appeal and utility in examining many important historical decisions. The empirical evidence from the social psychological laboratory is more complex (Esser, 1998; Paulus, 1998). Tetlock, Peterson, McGuire, Chang, & Field (1992) found empirical support for the concept in 12 different political decisions. The factors suggested by Janis do not all find support in the laboratory, but the deluding effect of dynamic and controlling leadership is by and large confirmed. Janis' work points to the obvious problems that derive from self-censorship, and from decisions within the group to withhold information inconsistent with the proposed course of action. We also know that strong leaders can and do stifle discussion. If groups want to prevent fiascoes, there are steps they can take to improve the decision-making process.

If anything, groupthink illustrates the processes that encourage the use of discussion to justify preconceived ideas. Groups have a tendency to focus on single solutions, when complex problems demand multiple reactions to difficult problems. Concurrence seeking produces groups that are robotic and “strain toward uniformity” rather than include the required complexity (Nemeth & Staw, 1989). Once the most influential individuals in the group opt for a course of action competing ideas have little chance of emerging. Arguments tend to become more one-sided as discussion proceeds, and since group members hear only one side, the discussion also tends to breed overconfidence.
It is not just cohesiveness that produces groupthink. Many marriages are very cohesive, but have built into their relationship acceptance of disagreement. This of course is also possible for other relationships and groups, regardless of their function or purpose.

3.2 The prevention of groupthink
If a group wants to come to decisions that are useful, effective, and correspond to the real world, there are steps to be taken to achieve that goal. Obviously a freer discussion in the group, allowing all opinions to be heard, might avoid some of the disasters that have occurred in our past history. It would also be helpful if the leader did not state a strong opinion at the very beginning of the deliberation, but instead welcomed all information and viewpoints. The group as a whole must also make sure that outside information is welcome and desired, and must provide room for critique. To prevent rash action the group could assign one or several people to play the “devil's advocate”, i.e., to argue the contrary point at every step of the process. In that manner some of the weaknesses of the proposed action may be illuminated before action is taken. The leader could also divide the group into subgroups with different responsibilities, and then bring them together to confront their separate recommendations. Finally, the group could seek anonymous opinions that carry no risk of rejection.

Janis (1982) summarized these points as recommendations for leaders who want to prevent groupthink:
1. Tell the individual members what groupthink is, and tell them about the major antecedents and consequent faulty decisions. Be open-minded, do not favor any position at the beginning of deliberations.
2. Encourage group members to be critical and skeptical, encourage doubts about any proposed solution.
3. Ask specific members to play the role of “devil’s advocate” i.e., questioning and arguing the opposite side of every issue.
4. Subdivide the group to evaluate the decision separately, then join the members together to compare evaluations.
5. In decisions affecting rival groups seek to understand all possible reactions by these groups. Is the proposed decision good for the group in the long run?
6. After the decision is made schedule a second “last chance” meeting to review, once more, any final doubts.
7. Invite experts who are not members of the group to evaluate decisions, and have these experts attend separate meetings.
8. Encourage group members to consult with knowledgeable associates and have them report back their reactions.
9. Encourage groups that are independent from each other to work on the problem and to come up with their independent recommendations.
These are recommendations that should be adopted by decision makers at any level of society. Obviously the more critical the problem and consequences, the more important it is for the leader to prevent groupthink.

3.3 The power of the minority
History is replete with examples of the power of minorities on social practice and debate. While group influence is overpowering for most individuals, a minority can, by following certain principles, change group opinion. Think for a moment about all the social movements in history where a minority, even a minority of one, swayed the powerful majority and caused a rupture with the past. The Copernican revolution, removing the earth from the central role in our planetary system, is one example. Galileo was another minority of one who defended the heliocentric model despite grave threats from the establishment. The right to vote for women was not a free gift from men, but came about because very brave women and men in the minority fought for decades against all odds. The abolitionists who struggled to end slavery were long a despised minority in the US, but eventually their view prevailed in a terrible civil war.

Minorities can have great influence when they follow several research-based behaviors. Moscovici et al. (1969; 1985) showed that three principles are of primary importance for success. The first is consistency. If the minority is consistent and does not waver in its proposed course of action, the consistency is likely to produce change in others. When the minority follows the majority it is most likely due to conformity pressures. However, when the majority changes its mind in the direction of the minority, it is because the consistency of the minority opposition has encouraged the majority to reflect more carefully on its decisions. When dissent occurs within a group, people sometimes become aware of new information, and think of new and novel ways to solve problems. A consistent minority may encourage creative thinking on task solutions. In the jury system a minority may sway the majority by being persistent and consistent (Nemeth, 1979).

Self-confidence shows that the minority believes in the validity of its arguments. If the minority does not consistently display self-confidence it raises red flags in the minds of the majority. A timid minority creates the impression that its objections are not valid and that the minority is incompetent. The self-confidence with which the minority addresses issues, on the other hand, influences and changes positions (Nemeth & Wachtler, 1974). When the minority confidently and continually puts forward its point of view, it disrupts the perception of unanimity that the majority relies on for conformity. As the discussion proceeds in the group, those in the majority who have censored themselves in pursuit of unanimity may begin to speak out more freely. Once such defection occurs, it starts a process of self-evaluation within the majority that causes more defections, as a defecting person begins to have more credibility with the majority (Levine, 1989). Defection to the minority matters for both sides, reassuring the minority and casting doubt on the majority position. Conversely, the minority would also be influenced if one of its members joined the majority (Wolf, 1987).
Since practically any worthwhile position was once a minority position, it is in social minorities that we must place our hope for improvement in society and groups. The majority will always conform or sit on the fence. Only the minority possesses the fortitude to keep working toward the cause it believes is right, whether to improve education, science, or other facets of community life.

3.4 The cultural view: The phenomenon of groupthink in other nations
Is groupthink primarily a phenomenon of extreme conformity processes in Western cultures? We have seen how critical situations (the Bay of Pigs invasion of Cuba and the war in Vietnam) caused US decision makers to make faulty decisions with terrible consequences for millions of people. Are other cultures equally affected by groupthink? Do we have any reasons to believe they are not, or are other cultures perhaps even more conformist? Eastern cultures often stress harmony at the expense of individuality. Might the drive for harmony elicit even more efforts toward group cohesion at the expense of reality-based decisions? Nisbett (2003) found evidence in his study that groupthink is very significant in East Asian cultures. Every effort is made so participants in decisions and meetings do not “lose face” through unexpected conflict. Often there is no true debate in the group context. In Japan groupthink is so powerful, even in scientific meetings, that there is rarely any real debate that might be considered confrontational. In fact, Japanese science is underperforming given the large amount of resources dedicated to research and knowledge (French, 2001).

How can we then explain the apparent contradiction that many Japanese companies do extremely well in international markets, and even dominate some sectors? Japanese managers have found a different way: they meet individually with decision-making participants prior to the meeting to obtain consensus. The meeting is not for decision-making, but to articulate the consensus already obtained. Decision-making in other cultures is obviously a complex matter. In recent years Western managers were employed by Japanese companies like Sony, supposedly to shake up management, to get rid of unwanted employees, and to make the company more competitive. Is there a change in Japanese employment philosophy? Whereas before a worker had essentially a job for life, this system of patronage is disappearing in the face of global competition, and the American model, in which profit is all that matters, is being adopted.

4. Leadership in groups
Effective leadership would include the idea of minority influence. Real minority influence is absent in many present day parliamentary democracies. In many European countries manipulation of voter opinion ensures electoral victories, and getting elected and reelected seems the only goal. However, to guide and mobilize groups toward worthwhile goals requires individuals who are willing to go against the grain, and set new goals outside the current social frame. To act otherwise is to act in favor of social stagnation.

Many studies have shown that when leaders work with a democratic style it provides group satisfaction and improves productivity (Spector, 1986). People tend to thrive and take pride in achievements under democratic leadership. This has led some societies to experiment with participative management (Naylor, 1990). However, if such management styles are just adopted to increase productivity as a form of manipulation, and do not involve real power sharing, benefits will likely prove temporary and dependent on surveillance.

4.1 The role of gender in leadership
Women have had to deal with special gender-based prejudice when they seek or exercise leadership positions. There is much research that supports the contention that male and female leaders are perceived and treated differently. If a woman acts like a male, i.e. displays an authoritarian or forceful style of leadership, this is negatively evaluated (Eagly, Makhijani, & Klonsky, 1992). While the negative evaluation of female leaders is found in both sexes, it is especially present in males. Males react more negatively to “bossy” styles that run counter to traditional female roles in society.
Gender roles have been in great flux over the past decades as more and more women enter the work force, and as gender equality is being sought in all arenas of economic and social life. In universities there are now more women graduates than men, and women make up 46 percent of the work force in the US. Still, less than 1 percent of the top managers (CEOs) of the Fortune 500 (largest) companies are women, and only 4 percent of other top management positions are held by women (Eagly & Karau, 2002).

We can observe two kinds of prejudice against women. If women behave in a communal fashion, i.e. show they are concerned about the welfare of others, and are warm and affectionate, then they are perceived as weak in leadership. On the other hand if a woman claws her way to leadership by behaving like men in similar positions, she is evaluated negatively since these behaviors are perceived to be contrary to how women are expected to behave. So how can a woman win? If she acts consistently with expectations she is perceived as weak. If she is more agentic, i.e. more assertive and controlling, she is acting contrary to societal expectations (Carli & Eagly, 1999; Eagly & Karau, 2002).

Acceptance of changes in gender roles does not occur overnight. Many of the perceptions are very complex and nurtured by all the agents of society: in education, in the political system, and in subconscious culture. They affect self-concepts and self-esteem in many ways. The prejudice against women leaders seems to be receding (Twenge, 1997), as the percentage of men and women who prefer male bosses is decreasing. There is also a growing acceptance of the idea that good leaders should have the traditional characteristics of both genders. Those who are most effective in leadership may well be those who are both communal (affectionate) and agentic (assertive).

5. Are risky decisions more likely to be made in groups?
In a series of experiments Stoner (1961) learned that groups, as a collective, are more likely to produce risky decisions than individuals are. The participants in the experiment were asked to give advice to others on various courses of action which varied in the risk posed to the individuals. For example, should a person stay with a company that is secure, but only pays a modest salary, or should he move to a company that is a risky venture, but might potentially have a great payoff in the future? This decision is a problem that many face, and people vary greatly in their tolerance for risk.

But in addition to these individual differences Stoner also found a new phenomenon of group behavior that he called the “risky shift”. Generally, when people made decisions in groups they were more likely to recommend riskier decisions than when they evaluated the decision individually (Wallach, Kogan, & Bem, 1962). These studies revealed that the risky shift occurs when the group is seeking consensus after a relatively brief discussion. Dissenting group members will often change their minds toward greater risk after such a brief discussion, which perhaps does not allow for a consideration of all the consequences or a full understanding of the risk.

The risky shift has serious implications for many group decisions. When the outcome is of great importance, perhaps it is best to follow the Japanese model and have people make individual decisions in pursuit of consensus, provided that consensus is not just another word for conformity sought in the individual consultation. However, as we frequently see in social psychology, matters are not as simple as the earlier researchers thought.

5.1 Group polarization
Science is always self-correcting. It soon became apparent that the risky shift was not as simple as initially thought. Further research showed that groups did not make riskier decisions all of the time; it depended on the initial views in the group. The group process produced more extreme decisions, i.e. groups tend to accentuate already existing opinions. If these initial opinions tend toward more risk, then the group process increases the risk level. If, however, the group predominantly expresses conservative opinions in the pre-decision phase, then the resulting decision becomes even more conservative (Moscovici & Zavalloni, 1969; Myers & Bishop, 1971; Zuber, Crott, & Werner, 1992).

Does polarization emerge in naturally occurring groups in society? Observe the conflicts in the world where people from the same ethnic community, and with largely similar beliefs, are killing each other over dogma about ancient historical events. Terrorism does not occur suddenly without any antecedents. It occurs when people having grievances come together, as is happening in ethnic communities throughout the world. As people with grievances interact, moderating voices get lost since everyone wants to articulate these long-suppressed hurts, and opinions gradually become more extreme (McCauley & Segal, 1987). Individuals isolated from facilitating groups would never commit the terrible acts of terrorism that we now see on a daily basis.

This group polarization effect has now been well established. In decisions and discussions the group favors more extreme viewpoints, whether cautious or risky. Why is that the case? The literature provides us with several explanations. Group discussion elicits a pooling of ideas, which may include persuasive arguments not previously considered by group members (Stasser, 1991). When people hear relevant arguments not previously considered, they sometimes shift their positions; so arguments and relevant information matter. At other times we change because we compare our viewpoint to that of others in the group. People will often not speak out until they can compare their views to those of others. This could be called ignorance of group opinion or “pluralistic ignorance” (Miller & McFarland, 1987). Sometimes just hearing the opinions of others will produce a shift in the more cautious or risky direction.

The group is gathered in order to make a decision. Therefore the different arguments in favor of each course of action will have a hearing. However, since each side of the argument will present its viewpoint, more arguments will be heard from the side that had most of the initial support. Hearing more of a given side in an argument increases the likelihood of others concurring, and since those presenting the arguments tend to have more extreme views, the majority in a group follows this polarization. To put it in other terms, the group discussion exposes the average member of the group to more arguments in favor of the position he already favored. Exposure to more arguments, and more extreme arguments by partisans of a given viewpoint, serves to strengthen the individual’s initial inclinations, and we therefore observe group polarization.
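
The arithmetic of this account can be made visible with a toy numeric sketch. The opinion scale, the starting values, and the update rule below are all assumptions chosen only to illustrate the mechanism; they are not drawn from the studies cited here.

# Toy illustration of the persuasive-arguments account of group polarization.
# Opinions run from -1 (extremely cautious) to +1 (extremely risky).
initial_opinions = [0.2, 0.3, 0.4, 0.6, -0.1]   # the group leans mildly toward risk

# Members voice arguments roughly in proportion to how strongly they lean,
# so the pool of arguments heard is tilted toward the initially favored side.
argument_pool = []
for opinion in initial_opinions:
    n_arguments = max(1, round(abs(opinion) * 10))
    argument_pool.extend([opinion] * n_arguments)

pool_mean = sum(argument_pool) / len(argument_pool)

# After discussion, each member moves halfway toward the pooled arguments.
post_discussion = [o + 0.5 * (pool_mean - o) for o in initial_opinions]

print(round(sum(initial_opinions) / len(initial_opinions), 2))   # 0.28, the pre-discussion mean
print(round(pool_mean, 2))                                       # 0.4, the tilt of the argument pool
print(round(sum(post_discussion) / len(post_discussion), 2))     # 0.34, the mean has shifted toward risk

Because partisans of the initially favored side supply more arguments, the pooled arguments sit further toward risk than the average starting opinion, and members who update on them end up, on average, more extreme than they began.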

Does mere exposure to a pool of arguments produce more extreme viewpoints in the direction of the initially preferred course of action? Support for this contention is found in a number of studies (Burnstein & Vinokur, 1973; Clark, Crockett, & Archer, 1971). Group polarization is defined as the tendency for group decisions to be more extreme than those made by individuals, in the direction of the group’s initial positions. Results show that groups adopt more extreme positions than do individuals alone.

5.2 Group polarization and social comparison theory
The social comparison theory first advocated by Festinger (1954) suggests that we try to understand our world by comparing how we stand in relation to others (see also chapter 2). Such comparisons may have consequences for our identity and behavior (Stapel & Blanton, 2004; Suls & Wheeler, 2000). How do comparisons lead to group polarization? Most people think of themselves as favoring the more extreme “correct” position when compared to others. For example, if the socially valued course of action is to be cautious you may take an even more cautious position, whereas when the preferred action is risky you may advocate an even riskier position. People would be more cautious with the money of loved ones as that is considered the “correct” position, but perhaps more risky with money of their own.

The group context therefore becomes somewhat more risky for issues where a risky course is favored initially, and somewhat more conservative on issues for which initial caution is considered the right decision. In the desire to be different from others we adopt more polarized viewpoints, but always in the “right” direction, the position favored initially by the group (Brown, 1965; Ohtsubo, Masuchi, & Nakanishi, 2002; Rodrigo & Ato, 2002). This result is explained by the commonly accepted idea that people like to be liked and want to be accepted. In the process of striving for acceptance we learn the values of our group. To be accepted, liked, and viewed in a positive light, we support group values and show our leadership in the direction of the accepted opinion (Blaskovich, Ginsburg, & Veach, 1975; Zuber, Crott, & Werner, 1992).

5.3 The cultural view: Do some societies value risk more than others?
The initial studies on group polarization were carried out on US students, and the majority of results displayed the risky shift described above. But do all cultures favor risk? Western societies regard risk taking as behavior to be admired (Madaras & Bem, 1968). For example, risk takers are seen as possessing more favorable traits. In one study risk takers were seen as more creative, more intelligent, and more socially confident than the cautious (Jellison & Riskind, 1970). The appreciation of risk taking comes from the broader capitalist culture that dominates thinking in Western societies. Such a culture actively encourages risk taking, and accepts as necessary the possibility of failure and loss. This may explain why we find more risk-taking behavior in Western cultures (Gologor, 1977).

Whereas risk taking is admired in Western societies (Madaras & Bem, 1968) and risk takers are perceived in these cultures as more competent (Jellison & Riskind, 1970), cross-cultural studies of risk taking show that Africans value caution more than Western respondents do (Carlson & Davis, 1971; Gologor, 1977). These findings again demonstrate the importance of examining all research results from a cultural perspective, since we know cultural values to be of fundamental importance in any decision-making.

5.4 Polarization today
There are many events that can be used as examples of the polarization effect. The most recent to come to mind is the furor throughout the Islamic world over the cartoons published in a Danish newspaper depicting the prophet Muhammad. No one reacted to these cartoons for months, except for a small group of Danish Muslims. They got together, discussed the cartoons, and eventually held a protest rally in Copenhagen. When that did not have the desired impact they decided to take the case to the Islamic world, meeting with religious figures from Egypt to Saudi Arabia. This course of action inflamed opinions further. Only then did extreme opinions really begin to take over the debate, with Danish embassies being closed down in Syria and elsewhere, the Danish flag burned, and a boycott of Danish products enacted in the Arab world. This was followed by further riots and the death of scores of people.

This all started with cartoons that were initially thought to be very funny by the majority of Danes, and that were intended to attack the self-censorship thought to exist in Danish newspapers. The riots probably strengthened this censorship by reinforcing taboos, although the extremity of these taboos was itself a product of polarization. The gap between civilizations was not decreased as a result of this process of group polarization, as moderate voices were drowned out by the clamor of extreme opinions. Modern means of communication like the Internet are not moderating voices, since people will primarily select the information they agree with and ignore other perspectives. Hate groups make good use of the Internet, and the group polarization effect represented there simply feeds extremist views.

A dialogue between varying viewpoints may help, but not if it is confrontational or argumentative. Nothing but polarization occurs as a result of argumentative interaction. A truly multiethnic worldview would accept not only that differences exist, but also that they are desirable (Van der Veer, 2003). The absolute truth is not present in any viewpoint; hence respect for sincerity and honesty, and a complete right to differ on any topic within broad humanitarian values, is required.

6. Conflict or cooperation in groups
Whenever two or more people gather there is an opportunity for conflict. That is true for groups as small as couples, and as large as nations. Often our goals and needs clash, and at times goals are totally incompatible. If we examine the world just in our lifetime, or even the past few decades, we see everywhere the distressing results of conflict and destruction. At the smallest group level, that of marriage, the divorce rate in the Western world is distressingly high, approaching 50 percent. Perhaps that has something to do with changing gender roles and the inability of people to adjust.

The murder rate in the US has justified its being called the murder capital of the civilized world. When we examine violence at the level of nations, warfare has increased not only in severity and brutality, but also in frequency during the 20th century (Levy & Morgan, 1984). There is nothing to encourage us to think that this pattern of violence will change in the future; only the combatants change. Social psychologists, along with specialists in other fields, have been involved in research that aims at addressing these problems and learning how to resolve conflicts peacefully.
Game theory, as exemplified in the prisoners’ dilemma game, has been used extensively as a framework for the study of conflict in the social psychological laboratory to understand how we can increase cooperation and trust.
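
To make the framework concrete, the sketch below illustrates the structure of a single round of the prisoners’ dilemma. The payoff numbers are assumptions chosen only for illustration; any values in which the temptation to defect exceeds the reward for mutual cooperation, which in turn exceeds the punishment for mutual defection and the sucker’s payoff, create the same dilemma.

# Minimal sketch of a one-shot prisoners' dilemma (illustrative payoffs only).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),  # temptation vs. sucker's payoff
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

def play_round(choice_a, choice_b):
    """Return the payoffs (player A, player B) for one round."""
    return PAYOFFS[(choice_a, choice_b)]

# Defection tempts each player individually, yet mutual defection (1, 1)
# leaves both players worse off than mutual cooperation (3, 3).
print(play_round("cooperate", "cooperate"))   # (3, 3)
print(play_round("defect", "defect"))         # (1, 1)

In the laboratory, participants typically play many such rounds, and researchers observe whether cooperation or mutual defection comes to dominate the interaction.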

Competitive actions increase the level of distrust until conflict ensues (Batson & Ahmad, 2001). When two systems are locked into an arms race the dominating fear is that the other side will take advantage of any weakness. Consequently arms are stockpiled to the point of absurdity. We now have in the world enough nuclear weapons not only to destroy the world once, but many times over. The arms race is a loss for everyone as is any conflict. This monster, which dominates the economies of most nations, eats up massive resources that could be used for the betterment of the world.

Some research has suggested the efficacy of a “tit for tat” strategy to encourage cooperation (Axelrod, 1984; Parks & Rumble, 2001; Van Lange, Ouwerkerk, & Tazelaar, 2002). This strategy of conflict management involves a group taking the initial step toward cooperation and thereby inviting reciprocation. Tit for tat then requires us to respond to the opponent’s reaction. If a cooperative reaction is elicited, then “tit for tat” calls for rewarding the opponent with more cooperation, thereby building more trust. If the response is not cooperative, the option remains to escalate the competition. One can only wonder where the world would be if such a conciliatory strategy had been employed in the past. Cuba has made many conciliatory gestures toward the United States over the past decades, but each has been received with disdain and more conflict. A strategy based on threats, by contrast, has been shown to be totally ineffective (Deutsch & Krauss, 1960; 1962; Turner & Horvitz, 2001).
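
A minimal sketch of the tit-for-tat rule as described above appears below: open with cooperation, then simply repeat the opponent’s previous move. This is an illustration of the logic, not code from Axelrod’s tournaments, and the move labels are assumptions chosen for readability.

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    if not opponent_history:        # no history yet, so offer cooperation
        return "cooperate"
    return opponent_history[-1]     # reward cooperation, answer defection in kind

# A short interaction: the opponent defects twice, then cooperates.
opponent_moves = ["defect", "defect", "cooperate"]
my_moves = [tit_for_tat(opponent_moves[:i]) for i in range(len(opponent_moves))]
print(my_moves)   # ['cooperate', 'defect', 'defect']; a fourth move would return to cooperation

The rule is conciliatory at the start, retaliatory when exploited, and forgiving as soon as the opponent cooperates again, which is why it invites reciprocation without being exploitable.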

6.1 Negotiating and bargaining toward a solution to conflict
To end any conflict it is necessary to negotiate. Unless both parties come to an agreement there is no way to end the conflict. That is one reason why unilateral decisions by a powerful actor will not work in the long run. The state of Israel is in longstanding conflict with the Palestinian people who inhabited the space upon which Israel is now located. Israel has decided to withdraw from some, but not all, of the territory that belonged to the Palestinian people prior to the 1967 war. In support of this it is building a wall the length of the country to effectively partition off what it wants to leave to the Palestinians. This wall not only places many Palestinians in second-class citizenship within the state of Israel, but also makes a viable state for the Palestinians almost impossible. Unilateral decision-making will probably result in a conflict that will be with us for decades to come.

Negotiations require people to communicate with opponents directly, and are based on the idea that there are solutions acceptable to all parties to the conflict. The ideal form of negotiation or bargaining takes into account the most and least important issues for each party. In that way each party compromises more on issues that are less important to itself but still of some importance to the opposing side. For example, for the Palestinians the return of refugees and the status of East Jerusalem as a capital of Palestine are probably among the most important issues in the conflict. A viable peace would seem most important to Israel. Giving up territory in exchange for peace is then the only viable option. The devil is in the details. When we distrust the other side we develop biased perceptions of the opponent, distrust their proposals, and overlook the obvious interests that both sides have in common (O’Connor & Carnevale, 1997).

However, it is not always easy to identify such integrative solutions. Distrust makes it nearly impossible for people to see commonalities in the search for solutions. Intractability calls for the services of mediators trusted by both sides, whose role is to identify integrative solutions beneficial to both sides for a negotiated end to conflict. Such mediators have been at work in nearly all past international conflicts, since war rarely results in any decisive victory. The mediations have had varying success. Some conflicts, like a union’s request for pay raises, can be bargained since both management and workers can identify solutions that would benefit both sides. Conflicts based on deeply held values are much more difficult to mediate.

Summary
Membership in groups is central to our lives, and therefore also to the discipline of social psychology. People join groups because membership entails many benefits related to survival and other social needs. There are those who would propose an evolutionary need for groups, as people in isolation often experience severe psychological stress.

A group is two or more people who are in a state of interaction. Crowds are not groups, nor are other gatherings that do not have the inherent property of interaction. Group structure follows quickly upon formation of a group as leader roles, group norms, and status of members are swiftly identified. Generally people seek out like-minded people when joining groups. Most people want reinforcement of their beliefs and attitudes and do not seek challenges to their deeply held worldviews.

Groups define the roles we play. In work groups these are often specified to a degree that allows for little ambiguity. Clearly defined roles produce satisfaction and improved production. Unfortunately, sometimes roles take over the identity of the individual as we see in the Zimbardo study. In that study on prison simulation, and in real life, guards became brutal and prisoners submissive in response to the roles imposed.

Gender roles are in a state of constant change. In recent decades we have observed some improvement in women’s struggle for equality, but the process is slow (Eurostat, 2007). That of course does not of itself overcome the long-term effects of culture. In capitalist societies progress in women’s rights has followed major social changes, and the struggles of brave women and men. Gender conflict remains in all societies due in part to the greater demands made on women who work outside the home, and the strain to adjust to changing roles and demands at home.

A strong feeling of friendship is the most important characteristic of cohesive groups. Such groups tend to be more effective and less dysfunctional than groups manifesting conflict. Some groups are only temporary; others are for life especially those that have common purposes and goals. When members accept goals and like each other the group is likely to be cohesive.

Group membership is important because people at times act differently when in groups. The research on social facilitation shows that groups energize people on simple tasks, leading to higher performance levels, but hurt performance on complex tasks. On complex tasks evaluation anxiety may divert or distract the individual away from task solutions.

Crowding is experienced as stressful and is therefore different from physical density. At sporting events crowding may intensify feelings, leading to hooligan behavior on the part of fans, and in other situations it has contributed to lynchings in the US. Crowding is therefore a subjective feeling of not having sufficient space, which can produce sensory overload and feelings of loss of control. However, if one is distracted, as perhaps when watching a favored sports team, the physical density of the fans may not be stressful or experienced as crowding. On the other hand a long trip on a bus may produce the feeling of not having sufficient space even among fewer people. The research indicates that physical density experienced as crowding in Western societies is not experienced as such in some Asian cultures. These cultures have developed elaborate norms of courtesy that allow people to live with high density and still maintain necessary distance and privacy.

We all know those in our task groups who loaf. Social loafing is manifested when individuals give minimal effort. It occurs mostly in situations where individual efforts cannot be identified, or the task has little meaning. When the individual is submerged in the group, task behavior may suffer as a consequence. Social loafing is greatest among strangers, and least among friends and family where there is a sense of shared responsibility. When the task is meaningful some individuals will compensate for others’ inadequacy, and step up their individual contributions.

Life has demonstrated cultural differences in social loafing. In all cases examined, collective farming in the former socialist societies did poorly compared to private farming. At the same time we have the example of the socialist kibbutz system in Israel that outproduced private farming. Clearly it is not social production that leads to loafing, but rather the feeling of lacking ownership of production and management. Differences within society reveal that women, who have more communal feelings, are also less likely to loaf.

Overall, when individual efforts are appreciated, known, and rewarded, when the task is challenging, and when group goals are accepted, social loafing is less of an obstruction in society. These findings can be applied to work situations by ensuring sufficient surveillance of work on simple tasks, and individual evaluations. Open spaces are encouraged for work on simple tasks. On complex tasks open spaces may be distracting, as such work requires more privacy.
Deindividuation occurs when the individual experiences a loss of identity and of the normal restraints that come from having acquired personal values. People do things in groups they would never do when alone. Le Bon referred to this phenomenon as a form of social contagion where impulsive and destructive behavior takes the place of rational evaluation. When in a situation of deindividuation people are less concerned about the evaluations of others, partly because of the anonymity afforded by large crowds. Many negative behaviors may result from deindividuation, including suicide baiting, lynching, and war.

In large crowds deindividuation is more likely, and conformity greater. If the norms are violent we observe the destructive consequences. In war the controlling parties do all that is possible to deindividuate individual combatants. In some societies combatants wear paint to reduce individuality and evaluation. In modern societies uniforms play a similar role of reducing the normal restraints against brutality. Therefore, if we are interested in reducing deindividuation we have to find some way to have combatants focus inward and become more self-conscious. In the process of individuation and self-consciousness, personal values will play a larger role in restraining unethical behavior.

One important area in the social psychology of groups involves an understanding of group decisions. Are these superior to individual decisions; are two heads better than one? If we rely on expert opinion we may avert process loss, and the kinds of communication problems that interfere with good decisions. However, under some circumstances group decisions are worse than individual opinion, worse than making no decision at all.

One problem of the group process is that generally only information known to all group members is shared in making the decision, and novel viewpoints are held back. It is easier to discuss commonly shared information, but perhaps the novel idea is key to a competent decision. One way to avoid the problem is to ensure that the group has sufficient time, as novel solutions would generally come after the common information is shared.

Groupthink has had great impact on some disastrous foreign policy decisions in the West, and perhaps similar decisions can be identified in other countries. Groupthink occurs in highly cohesive groups when they are under stress to achieve consensus. It involves faulty thinking based in part on stereotypes of opponents, feelings of moral superiority and invulnerability. The prevention of groupthink involves good leadership that not only allows, but also seeks complete free discussion, and is open to all points of view. Groupthink is mindless conformity that seeks to justify preconceived ideas.
However, minorities make history. Research has shown that when minorities display consistency in holding to a course of action, when they display self-confidence, and when they can elicit defections from the majority, they can indeed change history. Effective leadership comes from those who are willing to go against the grain. Also research shows pretty conclusively that democratic leadership not only is most satisfying to followers, but also is most effective in task completion.

Women’s roles have changed drastically in the last decades from being homemakers to winning a place in the larger industrial society. The world is changing, but women often find themselves in a double bind. If they act in more traditional communal ways they are perceived as weak in leadership, if they act in more masculine agentic ways they are perceived as less feminine. Some research indicates that the best leadership in society comes from those who can combine these traits.

Can we find examples of groupthink in other cultures and nations? There is considerable evidence of the existence of groupthink in Asian cultures. It is thought by some that there is little value in holding decision-making meetings in collectivist cultures, as decisions are made prior to any meeting. There is, for example, much pre-meeting consultation in Japanese companies, so the actual meeting serves only to formalize the consensus already established. The real question is: is the process of consultation just another way of seeking conformity and agreement with the preconceived ideas of the leadership? Perhaps globalization makes cultural differences less relevant. As more nations adapt to globalization, where the profit motive is the overriding concern, cultural differences become less important.

Are group decisions more risky? Yes, when groups seek consensus the risky shift toward more risky decisions occurs, at least in the US. However, later research on group polarization shows that for most interactions the group decision will primarily be more extreme in the direction of the already dominant opinion, whether risky or cautious. The reasons include the persuasive-arguments account, which shows that exposure to the quantity and persuasiveness of dominant arguments moves group members toward more extreme views. The social comparison account shows that we like to compare ourselves to others, and to be ahead of others toward the “correct” position. There are some cultural differences, with Western societies producing more risky responses than some other cultures examined. Again, globalization works toward more uniformity of values that may erase any cultural differences in the long run.
The world shows many examples of the devastating polarization occurring in attitudes and opinions prior to our wars and conflicts. Social psychologists have tried to address these issues in laboratory simulations utilizing game theory. These simulations support the strategy of taking initial cooperative steps, followed by rewarding cooperation by opponents. The initial cooperative strategy is most successful since threats have no useful function. For conflict to end the parties must find ways to communicate. Finding integrative solutions, which benefit both parties, is at times both difficult and complex. When the issue is about land or deeply held values, compromises through negotiation are not a likely outcome. On other matters like economic disputes, negotiation may bring about settlements that end conflict and provide mutually acceptable solutions.




Being Human. Chapter 7: Processes Of Social Influence: Conformity, Compliance And Obedience

Now imagine the following graduation exercises at a typical North American university. They were designed to create a memorable occasion with the aid of majestic music, ritual words of graduation, and students uniformed in their academic regalia. It is also, to the social psychologist, an opportunity to observe the forces of social influence up close. Somehow, some 4,500 students from Oregon State University in Corvallis, Oregon, manage to have their individual degrees delivered with an almost factory-like efficiency that perhaps best represents U.S. society. At the same time, the faculty are dressed in their medieval academic regalia, and are without doubt authority figures to many. Students obey directions, even standing up to two hours in line. The students line up in a particular order and conform to the requests, which determines the sequence in which they receive their prized document. Then they follow in majestic formation the Scottish band that precedes the parade through the university campus. When all are seated in the university stadium, with the president, deans, and honored guests on the podium, the ceremony begins. There are places for the audience to participate. Standing up for the national anthem produces universal conformity. The students and faculty also know that women may keep their hats on, while men, with one exception, bare their heads. There is also time to graduate military officers with a holy oath to defend the country from all enemies, foreign or domestic. This is followed by a roaring display of approval from the tens of thousands of family and friends. The applause from students and faculty is nearly universal. However, the individual who does not bare his head during the anthem evidently does not approve of the military and may be observed sitting with his hands folded. Several of his neighbors now apparently feel the same way, as they also refrain from clapping. A minority of one seems to have influenced the behavior of those who can observe his nonconformist behavior. Then the alma mater is sung, with the audience pretending to be in love with a non-personal entity, the university. Here the president and deans outdo themselves in demonstrating their fidelity to the institution, even though many are relatively new to the university and must quickly have adopted these new feelings.
Could you imagine such a ceremony at, for example, a Norwegian or Dutch university?

The above-sketched picture illustrates some of the processes of social influence, the subject of this chapter. In the described situation we can observe people complying with the requests of authority figures: being persuaded by the audience to stand at various times, take their hats on and off, and yell their approval of the military. The experience reflected informational conformity, for example responding to the need to know where to stand in line. It also reflected normative conformity, as in the universal rising for the anthem. Not one person refused to do that, so the national anthem must have exerted a great deal of social pressure. The graduation ceremony also demonstrated obedience to authority, reinforced by the status of those leading the events, and by academic gowns with their symbols of status, authority, and expertise.

No one was hurt by the conformity on display. Everyone obtained his or her degree in an efficient manner. Of course they all would have graduated anyway, whether they participated or not, since they had completed the requirements before the ceremony. Still, other than the mindlessness it promoted, there was no real harm done. Some might even have benefited from participating. To have public recognition of achievement is experienced as very rewarding by many.

Not all conformity has such beneficial results, as we shall see. Were those who participated in the massacre at My Lai (Vietnam) only following orders? Were the war criminals at Nuremberg, or later Adolf Eichmann in Jerusalem, excused by their obedience? The past century has been marked as a time of cruel and repeated genocides. We saw this cruel obedience in Cambodia, we saw it in Bosnia, and we saw it again in Rwanda. And now the same cruelty is being played out in the Darfur region of Sudan in Africa, and in countless other places. Are people really that cruel? Is it in human nature to behave in such manifestly barbaric ways?

In the US they say, “you have to go along to get along”, indicating that conformity is essential to successful social functioning. Often conformity is of the type manifested at the graduation ceremony, where people are told in indirect or more or less subtle ways what is appropriate behavior. At other times people are commanded to obey by those who have the appearance of legitimate authority. In fact all genocides appeal to and are sanctioned by the authority and ideology of the prevailing society. Usually there is preparatory indoctrination that allows the participant to feel that the genocide is justified and the right thing to do.

In this chapter we shall examine the whole range of social influence, from that which is an expression of social solidarity to behaviors that reflect destructive ideology and obedience to evil demands. Are people who participate in evil just evil people? Or is it within the capacity of most people to behave in cruel ways? Is obedience to inhuman demands a consequence of unleashing the evil in all of us, a consequence of being human and therefore normal? To what extent does the power of the situation determine whether or not we follow the slippery slope to participation? Social psychology has some answers.

1. Social influence: how we change attitudes, beliefs, and feelings
Social influence is the umbrella term that refers to how our speech, nonverbal behavior, and actions change others, or reinforce their existing beliefs. We meet with this phenomenon every day. Some bank wants you to use its credit card. Fashions also change, and clothing manufacturers spend considerable money to convince you that the new fashions are cool and you should buy. Your boss at work wants you to perform better, and you yield in hopes of promotion or in fear for your job. If you are in the military your options are few: you are given an order, and must obey. These examples demonstrate the presence of the three major types of social influence.

Conformity is where the individual changes his behavior as a result of pressure from others. Sometimes the pressure is obvious and explicit. At other times we have so internalized such pressure that few would risk social disapproval, although not many can produce good reasons for the behavior. Students become social drinkers as a result of peer pressure, in order to fit in. At times the pressure is toward binge drinking, with very unfortunate consequences for health or accidents. Conformity is the tendency to change beliefs or behaviors in order to match those of others (Cialdini & Goldstein, 2004). Most Americans hear conflicting messages from their society about conformity. In a society that prizes individual ruggedness it seems somewhat effete to conform. The Marlboro man who sold cigarettes to millions exemplified the ruggedness of the American male as he rode his horse across US movie and TV screens. Many yielded to this image and conformed by smoking, and it has cost millions their lives. The rugged individuality that appealed to so many was employed to create addicts who did not have any individuality. Eventually an actor who played the Marlboro man in these commercials himself died of lung cancer.
This episode shows, however, the ambivalence of American and perhaps other societies. Conforming is essential to achieving some degree of social harmony, whether in the US, the Netherlands, Norway, or other countries. At the same time we do not want our children to become binge drinkers just because everyone else is doing it. The struggle over involuntary prayer in schools in the US has to do with this debate over conformity influences. Are children in other countries exposed to similar pressures to conform? When children are small, the adults in charge, in particular a child’s teachers, produce many subtle pressures. Is prayer in school a good practice that encourages moral behavior, or does it compel children to conform in religious beliefs? Does the absence of prayer infringe on religious freedom if the majority wants prayer, or do we have a responsibility to protect the minority from such coercive influences?

Compliance on the other hand is when an individual responds to a specific demand or request from others. Compliance is usually associated with unequal power relationships. You might comply with a request from your parents to study harder and get good grades. If you do not comply there is the implicit possibility of withdrawal of parental approval or financial support. Often in life we are faced with explicit demands that require some change in behavior. However, it is possible to change your behavior while not necessarily your attitudes and feelings. You may work harder at schoolwork and improve your grades while feeling you are still wasting your time in college. At the moment complying seems the best option, until something better comes along.
Obedience is a form of social influence where the individual yields because a person with power commands him or her to perform in a particular way. The boss may say, “I am telling you to improve, I am not asking you”. In the direst circumstances we see obedience at work in all genocidal behavior. Usually genocidal acts are carried out with the support of legitimate authority, sustained by group cohesion and by the perception that the victims are different in some significant way. In Rwanda it was the Tutsis, in Darfur it is the non-Arab population, and during the cold war it was the communists or anti-communists, depending on where you lived. Being able to categorize people as different allowed some to participate in horrible behaviors that destroyed communities, and the souls of the participants. One has to wonder to what extent the delayed stress syndrome, particularly manifest among veterans of the US war in Vietnam, was a consequence of participating, following orders, in the horrible destruction of human life.

As we have also noted, sometimes conformity can be beneficial. At times we just do not have sufficient information, we are unsure, or we find ourselves in new or unsettling circumstances. We then look to others for some idea of what to do (see also section 7.3). If we did not live with some inhibitions, what kind of world would we inherit? When people became angry they would just lash out, in theaters boorish people would talk loudly, and everyone would push to the front of the line. Conformity has civilizing effects and helps produce social harmony. As the saying goes: “When in Rome, do as the Romans do”. Conformity can also kill the soul through mindless behavior. At the end of the day we make the decision whether to cooperate or to participate without reflection (Henrich & Boyd, 1998).

We shall see in this chapter that people will commit acts in a web of social influence that they would never commit by themselves as independent human beings. We have seen extreme human behavior such as mass suicides under certain conditions (Ferris, 1997). The so-called Heaven’s Gate cult committed mass suicide together in 1997. Years before, a religious cult led by the reverend Jim Jones committed collective suicide in Jonestown, Guyana. Hundreds of adults lined up with their children to receive a drink laced with cyanide, all under the direction of their leader, who took a similar route by having a follower shoot him. How can we explain the efficient machinery that produced the holocaust, the atrocities in former Yugoslavia, the massacres in Vietnam? The army company that murdered the civilians at My Lai were not sadists, but normal American draftees who responded to an order to systematically murder everyone in the village (Hersh, 1970).

These are of course extreme examples, but would we have behaved differently? In other words, does conformity come from social pressures that would be overwhelming to all of us in the same circumstances? Would we all, given the same strong social pressures from other group members, and the power of charismatic leadership, have conformed in similar circumstances? Is conformity normal?
On the other hand we can also observe from history the good that comes from conformity under very different circumstances. For example, India freed itself from the British Empire in part because a substantial minority practiced nonviolent protest. Appealing to the same ideals, the civil rights era arrived in the United States as a result of thousands of Blacks conforming to the principles of nonviolent protest. Many were beaten and some were killed, but at the end of the day Black people had more rights and fairness in their lives.

2. The ideomotor effect: William James
Psychologists were interested in conformity from the beginning, as the early work of William James (1890) demonstrates. The famous psychologist noted that behavior was often subconscious, and that just thinking about something made it more likely that a person would engage in that behavior. Have you ever sat with your family when someone yawned, and felt compelled to join in yawning? Some behaviors are literally copycat behaviors in which we unconsciously mimic the behavior of someone else. James called this the ideomotor effect.

This unconscious mimicry of postures, mannerisms, and facial expressions was studied by Chartrand and Bargh (1999). In their study they observed participants mimicking simple behaviors, like rubbing the foot or face, initiated by a confederate. They called this mimicking behavior the chameleon effect. They also wanted to understand why we develop this tendency to subconsciously mimic others. The experimenters thought that perhaps those who had a high need for others, a desire for approval, were more likely to conform. This hypothesis was confirmed in several studies (Chartrand & Bargh, 1999; Lakin & Chartrand, 2003). In fact the behavior is reinforcing to the person being mimicked, and we like those who mimic us more than those who do not. These positive feelings also spill over into other behaviors, as investigators found that when people are mimicked they are also more likely to engage in prosocial behaviors like donating money to a good social cause or leaving a large tip for a waitress (Van Baaren, Holland, Kawakami, & Van Knippenberg, 2004). At some level we find it flattering when someone copies our behavior, and we find great enjoyment in seeing a young child speak like his father, or otherwise adopt the mannerisms of an adult.

3. The classical studies in social influence
Conformity was among the earliest social phenomena studied by social psychologists. The first and most influential study in its day was the study on the autokinetic illusion performed by Sherif (1936). The effect was demonstrated in a laboratory with small groups of people. The participants would enter a dark room in which a steady point of light was displayed on a dark wall. Although the light in fact never moved, people experienced the light as moving after gazing at it for a period of time. This illusion of movement is known as the autokinetic effect; in reality the light appears to move because there is no stimulus to fix or anchor it as a reference. How do groups influence this illusion of light movement when in fact no light is moving? Sherif wondered whether other people would serve as a reference and establish some norm for estimated movement. Initially the participants were asked individually to estimate the length of this illusory movement. Individuals varied in their estimates, some saying a few inches, others more. Sherif then brought the participants together in a room and asked them to call out their estimated (but illusory) light movements. The question was whether the estimates of movement would converge in the presence of others, and therefore whether we might observe how group norms develop. This in fact happened. The varying individual judgments very quickly converged into a group estimate or norm. Further, this experimental norm apparently had long-term effects. When the participants were called back a year later, their individual judgments still reflected the previously established norm (Rohrer, Baron, Hoffman, & Swander, 1954).
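
A toy numeric sketch of this norm formation, under the simple assumption that each person adjusts partway toward the announced group average after every round, is given below. The starting estimates and the adjustment rate are illustrative assumptions, not data from Sherif’s experiment.

# Toy sketch of norm formation in the autokinetic setting: each person
# announces an estimate, then adjusts partway toward the group's average.
estimates = [2.0, 5.0, 8.0]   # initial individual estimates of movement, in inches
ADJUSTMENT = 0.5              # fraction of the gap to the group mean closed each round

for round_number in range(1, 5):
    group_mean = sum(estimates) / len(estimates)
    estimates = [e + ADJUSTMENT * (group_mean - e) for e in estimates]
    print(round_number, [round(e, 2) for e in estimates])

# The initially divergent judgments quickly converge on a shared value
# (here 5.0 inches), mirroring the emergence of a group norm.

Even this crude rule reproduces the qualitative pattern: individual variation shrinks rapidly and a shared estimate emerges as the group’s reference point.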

4. Informational conformity
Why would the participants move toward a group norm? In the dark room they saw the illusion under very ambiguous circumstances. Having nothing to rely on other than the judgments of others, they began to form a more or less collective judgment. We are social animals, and our ability to get along with others is reflected in our behavior. At times conformity is a form of information seeking, particularly when conditions create uncertainty and provide no direct answers. Other people can be a source of information about what is correct or what might be proper behavior when we ourselves are uncertain (Deutsch & Gerard, 1955). The influence of others on our behavior has been demonstrated in many other studies (Baron, Vandello, & Brunsman, 1996; Levine, Higgins, & Choi, 2000). Often this is not just mindless conformity, and people come to believe that the group estimate is correct. Not knowing what is correct, participants come to accept the correctness of the group norm that developed over time. Informational conformity may serve many useful functions by providing some framework for decisions in ambiguous situations.

There are occasions that are more complex, in which we do not know what a correct response is. Some situations are much more serious than establishing a norm for the autokinetic effect. Killing in drug gangs is a form of conformity. After hurricane Katrina those likely to commit murders in New Orleans were dispersed all over the country and for a time did not have their customary network to determine “correct” killing behavior. They were like the participants in the Sherif study, without any guiding norms. The murder rate dropped significantly even though those likely to commit murders were still alive. However, after a period of time the violent men reconstituted their violent gangs and their norms, and the killings resumed. In violence people also look to others for what is proper behavior. Once the shooting had started during the My Lai massacre the other soldiers found it easier to participate. Many soldiers had powerful reservations about the morality of their behavior. In most cases, however, the issue was decided in favor of conformity. In ambiguous situations where people lack information they will look to peers and leaders to see what is appropriate. Lt. Calley and the first soldier who obeyed provided that information.

In recent years informational conformity has been demonstrated in other ways. In law enforcement the accurate identification of suspects is extremely important. Unfortunately our ability to identify is often less than accurate, as we shall see in chapter 12. When this process was carried out in small groups of three or four in which confederates of the experimenter unanimously gave the wrong answer, participants responded with the wrong identification 35 percent of the time. If the issue was perceived as being very important, conformity to the false group identification rose to 51 percent. When the task was difficult and involved recognition memory, the group’s answers converged as in the Sherif study (Levine, Higgins, & Choi, 2000). The direction of the conformity depended on the frame established by the experimenter. When the frame in the instructions was “risky” the judgment norm became more risky, but when it was cautious the judgments became more cautious.

This finding has, of course, important implications for our social world. For example, the Bay of Pigs invasion of Cuba by the US evolved out of misinformation which had been adopted as a norm by the decision-making group. Essentially this norm said, “all you have to do is send 1,500 soldiers and the Cuban government will collapse” (see also the discussion of groupthink in chapter 6). Similar miscalculations were made by Hitler and his cronies in the attack on the Soviet Union during World War II, and more recently by the Bush government decision makers in the war on Iraq. In the case of the space shuttle Challenger, informational conformity also led to disaster. Despite warnings that there might be equipment failure, the decision makers looked to each other and, under pressure to perform, made a disastrous decision that led to the loss of the spacecraft and all on board (Schwartz, 2003; Schwartz & Wald, 2003).

4.1 Mass hysteria and informational conformity
When people are in crisis during natural disasters or war they will look to others for how to behave. Often in these situations people have no idea what is going on or how to respond (Killian, 1964). In crisis the need for accurate information is very high; we look to others to find some consensus upon which to base our judgment. In 1938 a curious expression of mass hysteria occurred in the US when the famous actor Orson Welles performed a radio play based on the science fiction book War of the Worlds by H. G. Wells. It was performed on the eve of Halloween, a time when people's fantasies were at a peak, and Welles was a very accomplished and convincing actor. The play depicted the invasion of the world by inhabitants of Mars, and the fictional drama was so effective that at least a million listeners were convinced that the earth was under attack by extraterrestrial beings. Several thousand actually got in their cars in an attempt to flee, although it was not clear where they would go (Cantril, 1940). In following up on the mass hysteria Cantril learned that many of those affected had listened to the program with other family members and friends. They then turned to each other to determine what to make of the situation, and being worried and seeing others worried added to the feelings of panic. Many thought they were about to die.

There were of course others who were better prepared. Some had listened to the whole program and knew from the disclaimer at the beginning that it was only a play. Others decided to call public services like the police department and learned in this way that there was no danger. Still others looked at the internal evidence of the play and found reasons to doubt. Nevertheless, in this simulated crisis many who did not know what to believe began to believe they were in the throes of a real disaster, the end of the world. Rather than look for evidence to disconfirm what was, after all, a very unusual situation, they tried to interpret the events to fit the image that had formed in their minds. They engaged in mass hysteria, and thereby also reinforced this hysterical view in family, friends, and others.

Such emotions can pass rapidly through a crowd. Le Bon (1896) spoke of a contagion effect. People by themselves may behave in rational and civilized ways, but in crowds they become barbarians. We have seen many examples from history, from national crowds whipped up with fervor in times of war to lynch mobs hanging innocent victims. Populations support their national governments with passion until the reality of grievous losses begins to affect the collective mind. This is what happened in the US during the war on Vietnam. During the football World Cup we can see similar, although more innocuous, behaviors, where spectators get caught up in national passion, even though it is after all just a game. Even when other people are not well informed we, in our ignorance, will often adopt their behavior, with tragic consequences in some cases and mindlessness in others.

A similar phenomenon is so-called mass psychogenic illness. Here people begin to manifest similar physical symptoms even though it is subsequently shown that there are no physical causes for the illness (Bartholomew & Wessely, 2002). In one school a teacher began to experience headaches and nausea after smelling gasoline. Soon students experienced similar symptoms, ambulances were called, and the school was shut down. Subsequent investigations showed that there was absolutely no cause for the symptoms or the alarm. This example also manifested a form of informational conformity in the presence of crisis and ambiguity (Altman, 2000). Today we have the additional problem of the speed of communication in our global community. In ancient times populations were limited in travel and means of communication, so hysteria had less effect on the rest of the world. Today hysteria can be spread in seconds through mobile telephones, television, and computers, while our populations have not grown correspondingly in healthy skepticism.

4.2 Ignorance and informational conformity
In any country governed by a rigid set of values enforced by punitive power one might observe other forms of mass hysteria. In the US during the cold war we experienced a time known as the McCarthyite period, a time of mass hysteria and conformity. Conformity to the norms of the day allowed for the witch hunting which followed, and could only have been brought about in an atmosphere of manufactured crisis and political ignorance. Thousands of people were accused of unorthodox political beliefs and behaviors. Anyone who held opinions in favor of social justice was smeared as a communist; this was particularly true of people like Martin Luther King who led the struggle for civil rights. Many thousands lost their jobs, and writers and performers were blacklisted in Hollywood. An atmosphere of suspicion and modern day witch hunting dominated the political and cultural life of the U.S.

This mass hysteria was in many ways similar to that observed in other situations of crisis. We have taken note of the violent responses to the cartoons of the Prophet Mohammed published initially in Denmark in 2005. The sectarian genocides in the Middle East and indeed other parts of the world partake of similar ignorance and manipulated hysteria. In any society where large numbers of people are ignorant of fundamental information about history, geography, and politics, there exists the possibility of conformity to informational norms produced by mass hysteria. Any crisis can be misused to produce genocidal behavior toward political, religious, and ethnic minorities.

4.3 What conditions produce informational conformity?
From the preceding examples we can observe some conditions that are likely to facilitate informational conformity. The more uncertain one is in a given situation, the more he/she will look to others for correct responses (Allen, 1965; Baron, Albright, & Malloy, 1995). The young soldiers at My Lai and the child soldiers of the Lord's Resistance Army found themselves in crisis situations, and both perpetrated terrible atrocities in their respective zones of combat. In Sierra Leone, in Africa, child soldiers would routinely cut off the arms and legs of totally innocent civilians. How could children do that? Do you think it is in the nature of these children to do that? Or did they have adults who demanded and modeled that behavior in a situation of crisis where the child soldiers' lives were in danger?

Ambiguous crisis situations are ideal for creating informational conformity, as the participants have no information other than that which is provided by their handlers. In Srebrenica, Bosnia, in 1995, thousands of young Muslim men were summarily executed by their Serbian enemies in one of the significant genocidal acts of the war. The perpetrators were in civilian life ordinary people who would not normally commit aggression. In crisis situations people do not have time to reflect sufficiently on the morality of behavior, and too often look to others to define what is proper.

In general, people who have status, expertise, and power are more likely to be role models for others. At an accident we look to emergency experts to guide us, or at least to those among the spectators who seem to know something about first aid and emergency procedures (Allison, 1992; Cialdini & Trost, 1998). Sadly, too often so-called experts have turned out to be misleaders, and have led us down the garden path to disaster. In any decision there is much that is unknowable, and dogmatic reactions seldom serve any group of people. Despite the insanity of mutually assured destruction we are still on the edge of nuclear catastrophe. What if the experts are not right, and someone really thinks that an advantage may be gained by a preemptive strike? The losers in all wars have time to regret that they followed leaders who were supposed to know how to make good decisions, but in the end brought ruin.

In informational conformity we go along with demands or behaviors because we want in some way to be right. The more we are connected to the group providing the information the more likely we are to trust and to follow the directives of the leaders. If we trust our religious leaders and prize our membership in a religious society we may accept information that in other circumstances would seem absurd. We have already noted the cults that committed suicide, and each country will have similar examples of conformity. In informational conformity we usually accept the influence extended and change not only our behavior, but also our minds (Griffin & Buehler, 1993). Informational conformity is therefore a rational process where we conform in order to behave in ways that reflect the group’s views of a situation.

5. Normative influence: The Asch studies on group pressure
In the Sherif autokinetic experiment the participants were faced with a very ambiguous situation. They found themselves in a completely darkened room with a fixed light that appeared to move. In this situation it is only natural to look to others, and as we saw, the participants eventually came up with a group estimate or norm. What would people do in an experiment where the stimuli were not ambiguous? An attempt to create an unambiguous situation to study conformity was carried out by Asch (1951, 1956, 1957).

In his studies participants gathered by arrangement in the psychological laboratory and were told that they were participating in a study on perception. It was a relatively simple task. They had to choose, from a card with three lines of differing lengths, the one which corresponded to a line on a second card. Perceptually the experiment contained no ambiguity, and participants working as individuals nearly always made the correct choice. However, of the seven people participating in the experiment, six were, unknown to the actual subject, confederates of the experimenter. After the first two trials passed with everyone making the correct choice, on the third trial all six confederates, one after another, made an incorrect choice. It was always arranged that the subject would be last to make a selection, after listening to the unanimous incorrect choices.

After this first very incongruent experience the confederates and the participant went through 11 more trials with the experimental collaborators each time calling out an obviously incorrect choice. There was no ambiguity here. The line on the comparison card clearly matched one of the lines on the card with three lines. What would you do: would you start to think that something was wrong with your eyes, or would you report what you actually saw? In this classic experiment about 75 percent of participants conformed on at least some of the trials, and overall participants conformed on about 37 percent of the critical trials. It is generally believed that Asch studied normative conformity in his experiment, based on the participants' desire to avoid disapproval and to be liked. Normative conformity also includes the desire to avoid harsher sanctions such as being ostracized from the group.

This level of conformity surprised Asch, since it raised questions about our education and national values. Why would people choose a line that was obviously not the correct response? Crutchfield (1955) automated the experiment in order to avoid problems of consistency among experimental confederates and obtained equally astounding rates of conformity, about 46 percent among the military officers tested. Despite being in leadership positions where accuracy is of great importance, a significant minority yielded to the unanimous majority. In this experiment, where there was no direct contact between participants and confederates, it is difficult to imagine any approval or sanctions arising from participating in the experiment. The results would suggest that we are socialized to behave in conformist ways.

What is startling about these responses is that there was nothing at stake in these experiments for the participants. There were no rewards for going along. How do these high rates of conformity square with the predominant notion of rugged individualism in U.S. society? In the Asch experiment we have a situation where people yield even when their eyes tell them otherwise. If people yield under such minimal pressure, what would happen when significant demands are made and the pressure is great?

6. We can resist conformity
At times, of course, the majority is right, and we would be right to go along. However, all too often we go along with the social norm because we are mindless, do not understand the issue, or are under great pressure to conform. It behooves us to remember that history is filled with examples of those who resisted conformity even at great cost. Those who refused to go along with the norms of corrupt social systems started the liberation struggles in many oppressed countries. This was true of the war of independence of the United States from Great Britain, as well as of the struggle for independence of Vietnam from the US and of Norway from Sweden, and of similar struggles in many other countries.

We should remember that even in the midst of genocide there are those who refuse to go along. At My Lai not all participated in the atrocity. Some simply refused to follow orders; one soldier shot himself in the foot in order to be evacuated away from the massacre; one helicopter pilot, seeing what was happening, set his helicopter down, picked up 15 children, and ferried them to safety. Remember that in the “War of the Worlds” radio play there were those who did not panic, who sought to behave in rational ways and sought information to disconfirm what they had heard.

We can also resist by adopting the attitude of skepticism that lies at the base of all scientific and social progress. Remember that once the vast majority of people and scholars believed the Earth was flat. It cost a great deal to resist that dogma and social norm, but it was resisted, and eventually we moved away from parochialism toward a view of the universe that is still evolving. We can resist by asking questions. We should all remember that conformity affects the very reality of the world (Bless, Strack, & Walther, 2001; Hoffman, Granberg, See, & Loftus, 2001).

7. We want to be liked: normative conformity
Some years ago there were a number of fatalities on the ferries going from Norway to Denmark as young people engaged in a dangerous game of hanging by their fingertips from the ferry railings. Why would anyone engage in such suicidal behavior? We were also told that in Brazil approximately 150 teens died in a similar game, surfing on the roofs of electric trains, and that hundreds more were injured. This raises the obvious question of why these teens continue to conform to peer pressure under conditions that cause great harm or even death. These behaviors are extreme examples of normative conformity, behaviors carried out for reasons of social acceptance. We often conform to group rules, or what we call social norms, by following the lead of others in our effort to find acceptance and respect (Miller & Prentice, 1996).

To be deviant under these extreme conditions is to be rejected by other group members (Kruglanski & Webster, 1991; Levine, 1989; 1999). Rejection by peers can for some have very tragic consequences. In Japan students subjected to group rejection are known to have committed suicide (Jordan, 1996). We are a social species, and we therefore need to be liked. We will often comply with norms even if we disagree with the behavior. What we do in front of others, however, may be different from our private opinions. Research has shown that we will conform in public while maintaining our private opinions (Levine, 1999). Influence based on the desire for social approval is called normative influence; we want to be accepted and not rejected, the common human experience (Janes & Olson, 2000). At times we just conform outwardly in order to get along. The boss at work may express political opinions with which we disagree, but we pretend to agree in order to keep our jobs, or perhaps because we see a promotion in the future. We may manifest our agreement in various ways while privately thinking he is a fool for holding the views he does.

For those who doubt the power of social rejection, studies have shown that being deprived of human contact is experienced as very traumatic (Baumeister & Leary, 1995; Curtiss, 1977). Perhaps that is why prisoners kept in isolation consider it the worst form of punishment.
Most people want to be liked by their peers, family, and others. We often seek their approval, and are motivated to conform (Larsen, Martin, Ettinger, & Nelson, 1976). Perhaps much of the behavior we see as aggressive or even genocidal is motivated by a desire for approval and to avoid rejection by significant others. Among all living organisms humans have the longest dependency period, and we learn early to distinguish between acceptable and unacceptable behavior. In other words, in a nonverbal way we learn the norms of the group early on. If the group has hostile norms, like the Ku Klux Klan in the US or gangs in the inner cities of Europe, members will display such behavior. There are even some gangs that require the killing of an innocent human being in order to become a member; it is called “making your bones”, and probably originated with gangs that ran various criminal enterprises.
However, if we behave in a certain way long enough, our behavior may eventually change our opinions. As already discussed in chapter 5, cognitive dissonance theory suggests that we need to experience a state of consistency between our behaviors and our beliefs and attitudes, or we will feel uncomfortable. Perhaps the employee, after outwardly supporting the opinions of the boss, may start a process of reconsidering his initial views. In this process the individual tries to empathize with the boss's perspective, and develops a new interpretation more in line with the conforming behavior. This post-conformity change in beliefs is supported by research (e.g. Buehler & Griffin, 1994). We have seen that even when there is little risk people will still conform in order to be liked. In the Asch experiment there was little informational conformity involved since the task was not ambiguous. The choice was obvious, and still many of the participants went along with the unanimous majority (Janes & Olson, 2000; Kruglanski & Webster, 1991; Schachter, 1951).

8. Factors that support conformity
Research has demonstrated that some situations are more likely than others to create conformity. Among these factors are group size, unanimity of group opinion, and the level of commitment to the group (Cialdini & Trost, 1998). The size of the group has only a minimal effect. Experiments show some effect of group size up until the group reaches a size of about four; beyond four, group size has little effect where this has been tested (Asch, 1955).

8.1 Unanimity of group opinion
The initial studies were carried out with a unanimous group opinion favoring the wrong choice. As we have seen, that produces powerful conformity effects. What would happen if the group did not express unanimous opinions? Of course it takes a great deal of bravery to stand up to friends as well as enemies, to be a minority of one. In the Asch experiments the confederates were strangers to the subject and should logically have produced little pressure. However, research shows that if the subject in the Asch paradigm has just one ally who refuses to go along with the majority opinion, the conformity rate drops to about 5 percent. Just one ally weakens the normative influence in the Asch paradigm, and participants may start to think “there is obviously one more sane person in the group” (Morris & Miller, 1975).

This result should give us all pause for thought. If just one person can produce resistance to conformity pressures, should we not safeguard free speech as essential to accurate decision making? Should we not do all that is possible to retain a “devil's advocate” whose role is to consistently take the opposite position on questions or issues before the group? Only in this way can we protect the free thought so essential to any progress, whether scientific or cultural. The lone dissenter decreases the confidence of the participants in the majority. As the story goes, “perhaps the emperor really does not have any clothes on” despite pretensions. The dissent indicates that there is room for some skepticism, that the issue is not closed but needs further evaluation, and hence encourages less reliance on the correctness of the majority opinion. This will work, of course, primarily when the conforming individuals already have private doubts about the majority opinion, but have been afraid to utter these in public. We can only guess, but governments that do not rely on true consensus probably have more to fear from dissenters, and therefore seek to suppress such dissent, as we saw, for example, in Hitler's Germany, in Stalin's Soviet Union, in the Burma of the junta, and everywhere brutality is the norm in suppressing dissenting opinion.

8.2 Is the group important?
Some groups to which we belong are not important to our lives or happiness. Perhaps the university psychology class is of this type. Sure, you want to get along with teachers and fellow students, but in a short time you will be on to other things in your life. Perhaps you belong to a group that plays some type of game, and while you enjoy the interaction the group is not crucial to your self-esteem or your worldview. Most people have the experience of membership in groups that are desirable for some reason, but they would not be crushed if they no longer associated with the group or its members.
On the other hand there are groups that are central to our lives and sense of well-being. Such groups often include the family, but may also include groups based on religious or political philosophy. In these groups you find expression for what you consider to be the meaning of life, and perhaps prescriptions for how to have a happy life, in some cases eternal life. These groups are obviously of great meaning to the individual, and therefore elicit greater commitment and willingness to sacrifice for the welfare of the group. The bond between the group and its members affects the level of conformity. The stronger the bond, the more likely the individual will conform to group opinions and norms.

Certain positive forces keep group commitment at high levels. These include liking other group members, feeling that important goals are being reached, and the positive gains obtained by group membership. These positive forces also lead to higher levels of conformity. There are also negative forces that keep the person involved in the group, and they have similar conformity effects. These include having few other alternatives. For example, you are a middle-aged man and have not trained for any work except that which you are now performing. At the same time your investment in the company is very large; perhaps you hope eventually to obtain a generous retirement. These conditions are equally likely to produce more commitment and conformity.

8.3 Do we differ in our need to get along?
People are different. There have always been individuals in any society who had the courage to be different, and who thereby emboldened others. Some people simply like to be different, to stand out from the crowd in a distinctive way. The willingness to be different is called the desire for individuation, and has been demonstrated in a number of studies (Maslach, Stapp, & Santee, 1985; Whitney, Sagrestano, & Maslach, 1994). People who are willing to stand apart from the majority help others resist conformity pressures by showing that there might be opinions different from those summarized in the group norm. They also serve as a source of allies and confederates for those who want to resist.

8.4 Low self-esteem and conformity
In addition to approval seeking, other personality variables may play a role in conformity as well. From our personal experiences we probably know people who seem more conformist than others. People with low self-esteem may not have the personal confidence necessary to resist group pressures. One reason may be that the low self-esteem person fears rejection to a greater extent and is therefore more likely to conform (Asch, 1951). In later research Crutchfield (1955) found support for this contention. In related studies those who perceived themselves as having a need for social approval were also more likely to display normative conformity (Snyder & Ickes, 1985). Personality plays a role, but can be overridden by the more powerful influence of the situation. People may appear inconsistent in conformity primarily because the demands of the situation differ. Behavior is a consequence of both personality and the situation (McGuire, 1968). Of the two, the situation tends to be the more powerful (Larsen, Coleman, Forbes, & Johnson, 1972).

9. Gender differences
In most societies males and females are socialized in different ways. Socialization is related to the different social roles played by the two genders, although these roles are being redefined in modern society. Still, there are both biological and social differences between boys and girls. It should therefore not be surprising that social psychologists have shown an interest in gender differences. Traditionally it is thought that females are socialized to value relationships and interdependence more than males. Since social relationships are seen as somewhat more important to females, we might expect in them a greater desire to get along and to conform (Eagly, 1987). Given these sex role differences, conformity behavior is in the expected direction. In a meta-analysis of 145 studies men were less prone to accept influence, but the overall difference was small (Eagly & Carli, 1981). The critical variable for conformity was found in situations that produced direct group pressure. When an audience can directly observe behavior, females conform more. Do women conform because they are more conforming by nature, or do they conform because of political correctness? Despite political correctness, the core of conformity is responding to group pressure. What one's private opinion is might not have many consequences for the person or society; what matters is what we do in the social setting. Responding to direct pressure is really the critical variable in conformity, and where that occurs, for example in Asch-type studies, females conform at somewhat higher rates (Becker, 1986; Eagly, 1987).
With the growing emphasis on women's emancipation we might expect the difference to shrink. But will it go away? It is interesting that each gender conforms more when the issue is associated with the other gender. Thus females conform more on what are commonly considered male issues such as geography or mathematics, whereas males conform more on issues where women are supposedly the experts, like child rearing (Sistrunk & McDavid, 1971).

10. The influence of culture
Some cultures prize individuality, while other cultures put value on the welfare of family and society. Nowadays in most western societies a person lists his given name first and his family name second, particularly in informal social settings. In East Asian countries the reverse is true: people list the family name first as the primary identification, then the individual name. Perhaps this is an illustration of the differences between what might be called collectivistic and individualistic cultures. Milgram (1961) replicated an adaptation of the Asch experiment in Norway and France and found significant differences between the countries, with the Norwegians conforming more than the French. He explained these differences by concluding that Norwegian society is highly cohesive, whereas French society is less cohesive and more individualistic.

Many other cross-cultural studies of normative conformity have been completed utilizing the Asch paradigm. Whittaker & Meade (1967) found levels of conformity in Lebanon, Hong Kong, and Brazil similar to those among American respondents, whereas respondents from the Bantu of Zimbabwe conformed to a higher degree. It seems that culture matters. The composition of the group is, however, also important. If the group is largely anonymous, as in the Asch experiment, then otherwise more conformist cultures may produce lower levels of conformity (Frager, 1970; Williams & Sogon, 1984). Similar results emphasizing the importance of the nature of the group were also found in Britain and Germany (Abrams, Wetherell, Cochrane, Hogg, & Turner, 1990). Conformity to strangers is less powerful than conformity to a well-established and valued group (Moghaddam, Taylor, & Wright, 1993).

Overall conclusions from a meta-analysis of some 133 studies from varying cultures show that collectivistic cultures produce more conformity than those with more individualistic socialization (Bond & Smith, 1996). Perhaps one reason is that conformity is not seen in the same light in the two types of cultures. In the western world conformity is a negatively laden term indicating personal weakness. In other cultures, however, sensitivity toward others is valued as part of the culture of courtesy (Smith & Bond, 1999). In general collectivistic cultures value normative conformity as a means of creating social harmony and supportive relationships (Guisinger & Blatt, 1994; Markus, Kitayama, & Heiman, 1996).
Perhaps there are also deeper values related to human survival. In some of the more collectivist cultures people live in closer quarters, and social harmony is therefore of greater importance. In other cultures conformity may be related to physical survival. Developing societies that rely on hunting or fishing may value independence more than societies that are agricultural. Hunting and fishing require traits of assertiveness and independence, whereas agricultural societies value conformity; there, conformity and cooperation are essential because survival depends on interdependence and close living situations.

In the modern Netherlands the lack of space produces the opposite effect through the application of a norm of tolerance for differences. Tolerance overcomes the lack of space. In Norway there is plenty of space but also a strong influence of traditional values. Obviously the history and development of a society make a difference in the relationship of values to conformity.

11. Transhistorical changes in normative conformity
Today many textbooks indicate that rates of conformity are changing in the United States. They cite studies from 25 to 40 years after the original Asch experiments which show decreasing rates (Bond & Smith, 1996; Lalancette & Standing, 1990; Nicholson, Cole, & Rocklin, 1985; Perrin & Spencer, 1991). However, these apparent changes may reflect different conformity processes rather than less conformity. During this time the protection of human subjects became a hot issue that likely produced more skepticism and resistance among students participating in psychological experiments. Furthermore, a new type of conformity called “political correctness” replaced the old incentive of dependence on authority figures. Nevertheless, the aforementioned results at least have the merit of calling attention to the fact that changes do occur over time in the history of social psychology.

Often our research is presented as if it represented immutable truth established with transhistorical validity. In fact, Larsen and his co-workers have shown a remarkable correspondence between conformity in the Asch experiment and conformity in society (Larsen, 1974d; Larsen, Triplet, Brant, & Langenberg, 1979; Larsen, 1982; Larsen, 1990). Initially Asch showed that conformity was high in both society and the laboratory during the 1950s, a time dominated socially by the conformity pressures of McCarthyism. Later, during the war on Vietnam, students began to question authority, and we saw a counter-conformity movement expressed by free speech and anti-war student organizations. During this period of the 1960s we also saw conformity rates decrease in the laboratory. However, in the 1980s there was little left of the ideals that had motivated young people in the preceding period. During this period students were primarily concerned about grades and careers. This social apathy corresponded to increases in conformity in the Asch experiment. The Larsen et al. experiments were valuable not only for pointing out the rates of conformity, but also for indicating that experimental behavior is correlated with the happenings in the larger society and reflects that society to some degree. Therefore the social psychologist's work is never done; we can never assume that our research has validity, at least as far as rates are concerned, beyond the generation in which the research was completed.

12. The influence of conformity in our daily life
The importance of research on conformity is established by how the findings translate to real life. One does not have to be an astute observer to see conformity pressures everywhere. Everyone rising for the national anthem is but one of many occasions when pressure to conform is acute. The elaborate rituals of courtesy that we observe in many cultures, including bowing or hand gestures, are also examples of conformity, but so deeply ingrained in the socialization process that few give them any thought. Changing fashions and fads are but another way to show that most people go along with the crowd. In fact, one way to show individuation is to not wear the common garb of society. Most people want to be liked and accepted, want to be seen as “cool”, and therefore have a keen interest in what peers are wearing.

In the late 1960s, when so many changes were occurring in society, we saw corresponding changes in social garb. We remember this as a time of movements against the war, but also a time for the liberation of defined minorities, particularly Blacks, and of others who were discriminated against, like women. Did these movements make women less interested in fashion? We think the evidence shows the opposite; only now the fashions reflected the new times, with women wearing what was formerly thought to be men's clothing, and in the spirit of the times hemlines rose to the level of the miniskirt.
Young women were sometimes faced with conflicting norms: the norms of society and of religious bodies that viewed the length of skirts as a moral issue, and those of peer groups that encouraged conformity to the shorter apparel. In the U.S. this conflict was especially present among students who attended religious universities. There were two conflicting norms that young women were trying to address at these universities: pressures from the peer group and pressures from the religious body that sponsored the university. How could the issue of hem length be resolved? Do you think by a compromise between the societal norm and the peer group norm? That is exactly what researchers found (Hardy & Larsen, 1971). Women's skirts at a religious university were shorter than the ideal announced by the university, but longer than the miniskirts then in fashion. It seems a rational solution which can be applied elsewhere: the individual, in the presence of conflicting norms, will seek a compromise between the two prescriptions that does not fully satisfy either norm, but allows for feelings of belonging to the competing reference groups. How do Muslim women handle conflicting dress codes?

12.1 The changing ideal body images
All who have visited other countries are aware that not all cultures hold the same view of the ideal human form, nor of what constitutes ideal female proportions. Many societies consider plumpness very attractive, as it connotes fertility, prosperity, and health. In our culture, however, extreme thinness has long been promoted as ideal womanhood (Anderson, Crawford, Nadeau, & Lindberg, 1999; Fouts & Burggraf, 1999; Jackson, 1992; Thompson & Heinberg, 1999). Anderson and her colleagues studied varying female ideals across cultures. They thought that the ideal form would depend on the presence or absence of food. In those societies where food was scarce, plumpness should be considered attractive, and that is exactly what they found. Only in societies similar to the U.S., where food supplies are plentiful, are skinny women considered attractive.

At the same time, what is considered the ideal female form has also changed within our society. For example, Silverstein, Perdue, Peterson, & Kelly (1986) examined the photos of models in two prominent women's magazines, Vogue and Ladies' Home Journal, from 1901 to 1986. Using new techniques they were able to measure the models' busts and waists, creating a ratio between these two measurements. The results showed dramatic changes over time. At the beginning of the 20th century attractive women were voluptuous, but by the 1920s thin and flat-chested women were considered most attractive. In the 1940s the social norm for female attractiveness returned to curvaceous women like Marilyn Monroe. However, since the 1960s extreme thinness has been the norm, to the great detriment of women's mental and physical health (Barber, 1998; Wiseman, Gray, Mosmann, & Ahrens, 1992).
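To make the measure concrete, here is a minimal sketch of a bust-to-waist ratio of the kind used in such archival studies; the function name and the example measurements are hypothetical illustrations, not data from Silverstein et al. (1986).

def bust_to_waist_ratio(bust_cm: float, waist_cm: float) -> float:
    # Higher values describe a more curvaceous silhouette;
    # values closer to 1.0 describe the flatter 1920s ideal.
    return bust_cm / waist_cm

# Hypothetical silhouettes for illustration only:
print(bust_to_waist_ratio(96.0, 60.0))  # 1.6   -- curvaceous (early 1900s or 1950s ideal)
print(bust_to_waist_ratio(84.0, 72.0))  # ~1.17 -- flatter (1920s ideal)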

Similar findings have been demonstrated for the appeal of thinness in Japanese culture (Mukai, Kambara, & Sasaki, 1998). There are obviously individual differences in how women respond to these social norms. Those who have high needs for approval are more likely to conform in different arenas (Larsen, Martin, Ettinger & Nelson, 1976). In Japan need for approval also predicted eating disorders as Japanese women responded to the demands of the social norm for thinness.

We all learn what the ideal form is, whether male or female, via informational influences from the media: the Internet, advertisements in magazines, and fashion shows on television. In response to these demands women have joined health clubs in what is for many a lifelong quest to shed weight. While we can applaud the health-giving effects of exercise, we must also be aware that when cultural standards approach absurdity they can only be met through efforts that may be very damaging to women's health. The routine of losing and then regaining weight is very damaging to the person's self-esteem. There are also direct impacts on physical health (Thompson, 2004; Levine & Smolak, 1996; Cohn & Adler, 1992).

12.2 Eating disorders and normative conformity
It should come as no surprise that women take drastic measures to achieve a more acceptable body image. In recent years we have seen many negative outcomes of thinness as a social norm, reflected in anorexia nervosa and bulimia (Gimlin, 1994; Sands & Wardle, 2003; Ellin, 2000). The norm of thinness is reaching even very young girls, who try to stay thin by dieting, self-induced vomiting, and the use of laxatives. The pressure to conform is primarily responsible for bulimia and anorexia. In anorexia the victim often sees herself as heavy even when she has reached a stage of morbid thinness. In bulimia there is often a pattern of binge eating followed by purging through various means. Crandall (1988) found that bulimia was primarily initiated by women's desire to conform to the eating patterns of their friends. Again, both informational conformity through various media and normative conformity in seeking the approval of peers play important roles. In the Ellin (2000) study almost one third of 12- and 13-year-old girls were actively trying to shed weight by means of dieting and purging. Society must have instilled devastatingly low self-esteem to encourage such drastic body modification in what are, after all, children.

12.3 Do men escape self-critical body images?
For men, too, we see similar unhealthy conformity processes at work. For example, in examining the changes that have occurred in boys' fantasy toys one can see a pronounced move toward more muscularity. G.I. Joe, a militarist toy depicting a warrior-type male figure, has changed since its inception in 1964. Initially G.I. Joe had normal male proportions, but it changed gradually over time to the latest incarnation of absurd muscularity called G.I. Joe Extreme (Pope, Olivardia, Gruber, & Borowiecki, 1999). At the same time the weapons associated with the figure have also taken on increasingly lethal proportions as expressions of aggression and hostility. Little boys are getting early training in militarist socialization.

Have boys and men also come under corresponding pressures to conform to an ideal body image through informational and normative conformity? There is much that points in that direction (Morry & Staska, 2001). In research by Pope, Gruber, Mangweth, Bureau, Jouvent, & Hudson (2000), men in the United States, France, and Austria were asked to indicate their preference for an ideal muscular male body. The participants believed that the ideal body was on average 28 pounds heavier than their own bodies. As part of the liberalizations that occurred in connection with the women's liberation movement, men have also been objectified as sex objects in women's magazines. Over the years a larger proportion of male models have been shown in various states of undress, reaching 35 percent (Pope, Phillips, & Olivardia, 2000). Although men think women prefer more muscular bodies, when asked, women prefer more normal male proportions. Clearly men are submitting to the propaganda of informational conformity.

12.4 Normative conformity to promote health?
A major problem in western societies is binge drinking among high school (Netherlands) and college age (U.S.) students. Those who participate often use normative influences to justify their behavior. They engage in binge drinking, they contend, because it is common among their peers. In actual fact most students overestimate the amount of drinking among peers, and the true norm is much lower than commonly believed. Since students often misperceive the true frequency of drinking, some universities in the U.S. are using informational and normative conformity to encourage more rational behavior. We know that those who promote drinking use attractive peer groups in their advertisements to encourage consumption. Could the same approach be used to decrease drinking? For example, what would happen if universities announced in the student paper that “most university students have four or fewer drinks when they party”? Would that help change the norm toward more responsible drinking? What if appeals about safe sex practices included information indicating that most of their peers practice safe sex or refrain from sex? These approaches have been used at a number of universities (Campo, Brossard, Frazer, Marchell, Lewis, & Talbot, 2003; Perkins, 2004). Normative influence, however, is most likely to have an effect if the pressure comes from the student's smaller reference group. Some of these campaigns may also have a downside. For example, heavy drinkers might reduce their binging, but those who never or rarely drink may be influenced to increase their consumption.

12.5 Resisting pressures to conform
People do not always give in to social pressure. Given the right conditions, people will act opposite to the demands of conformity; this is the domain of reactance theory. When people feel that their freedom of action or their ability to behave as they want is threatened, they may react by doing the proscribed behavior (Brehm, 1956). This so-called boomerang effect has been demonstrated in several experiments (Brehm & Brehm, 1981). During prohibition many drank heavily. When parents prohibited short skirts, girls found ways to make them shorter. A clear example of reactance is the “terrible twos”, when a small child first asserts his independence and the word “no” comes into frequent use. Sometimes parents will elicit the desired behavior by asking for the opposite: “no, you cannot have the green beans with your dinner”. If we have an ally, as we saw in the Asch experiments, we may at times be able to withstand social pressures. Do these strategies work in all situations? We shall take up this theme when we discuss the experiments on obedience and situational conformity.

12.6 With a minority we can resist informational and normative influence
The silent majority of the world has endured our destructive history in quiet desperation. It has always been the strong and principled minority that has produced progress and achievements. In the face of impossible odds, and against the mores, customs, and norms of society, the minority has progressively changed the world. Individuals and minorities have created all the innovations that have produced material and social culture. In the Middle Ages it was against scientific, and especially religious, norms to believe the Earth was anything but flat. The cosmos was viewed from the Earth, and all stars and planets were thought to rotate around our little spaceship. It took much courage and fidelity to truth to change these views to those that have allowed us to explore the planets and develop modern physical science. The development of secular societies based on reason has likewise been the consequence of great human struggles against superstition, and against those who would enforce dogma on the human family. Indeed the minority can not only resist, but can change the opinions of the majority over time (DeDreu & DeVries, 2001).

We have already seen in the Asch paradigm that having even one ally reduces conformity significantly. Later the work of Moscovici (1985) showed how a minority of confederates could change the opinions of the majority in a perceptual experiment where participants were asked to rate the color of slides. When there were no confederates all the participants rated the blue slides as blue. However, when two confederates consistently rated these same slides as green, about a third of the participants reported at least one green slide, and 8 percent rated all the “blue” slides as green (Moscovici, Lage, & Naffrechoux, 1969). The minority, it would appear, had a significant effect on the majority, who were the true subjects.

As already mentioned in chapter 6, it matters how opinions are presented. The minority must present its case with a style that conveys conviction, being both forceful and consistent (Wood, Lundgren, Ouellette, Busceme, & Blackstone, 1994). If they display principled opposition they are more likely to be seen as competent as well as honest (Bassili & Provencal, 1988). This is also the process by which a minority eventually turns into a new majority, as they convince others of the correctness of their position. Other factors that influence the majority are the logical soundness of the minority's arguments, and whether changing one's mind is of little consequence for the majority (Clark, 2001; Mackie & Hunter, 1999; Trost, Maass, & Kenrick, 1992).
Generally minorities are also more successful in persuasion when there are ties that bind the minority and the majority. In other words, those who are perceived as in-group minorities will usually have more influence on the majority than minorities seen as belonging to a different category or an unrelated out-group. Hence, a Bulgarian will be more successful in changing the opinions of other Bulgarians than will a person from Turkey or Greece (Volpato, Maass, Mucchi-Faina, & Vitti, 1990).

Social psychology is debating whether the process of influence is similar for majorities and minorities. The dual-process hypothesis suggests that cognition is very different for the two sources of influence. Minority influence leads majority group members to think seriously about the issue, which can change attitudes. Majority influence, on the other hand, is seen as more conformist, leading perhaps to changes in behavior but not in privately held attitudes (Forgas & Williams, 2001). The benefits of minority influence are especially useful on tasks which require creative and novel thinking, where people have to think “out of the box”, and where there is a need for many perspectives (Nemeth, Mosier, & Chiles, 1992). There are scholars with a different view. They think that both minority and majority influence can be expressed in attitude change as well as public compliance (David & Turner, 2001) (see also the discussion of how to prevent groupthink in chapter 6). However, the usefulness of minorities should indicate that all social units should treasure opposition and value minorities as a means of correcting errors and challenging “all-knowing” majorities. On the other hand, majorities typically elicit more conformity as they have the means of enforcing compliance, but that does not necessarily change private opinions. Minorities may influence fewer people, but the change is more significant and lasting (Maass & Clark, 1983).

There are those who would argue that minority influence is primarily of the informational type. Outside the Asch paradigm or similar experiments, are people in the majority concerned about minority opinion? Still, by providing contrary information in a consistent and courageous way the minority may eventually become the new majority. The silent majority complies with prevailing norms, but may be provoked to reconsider its beliefs by a minority with principle and daring (Moscovici, 1985; Nemeth, 1986; Wood, Lundgren, Ouellette, Busceme, & Blackstone, 1994).

13. Compliance: explicit requests to conform
We have seen conformity as the mimicking of the behavior of others, or as a consequence of the pressure of unanimous majorities. We have observed informational and normative conformity operating together in many behaviors. In compliance, however, people are responding to an explicit request from another person with some degree of power. When complying we respond not from desire, feelings, beliefs, or attitudes, but because of our relationship to the person making the request. In employment the boss may make a request for you to work overtime. You really have other plans, but since the boss can both reward and punish you, you would probably go along. There are some cases where people go along with a request for no good reason, as perhaps agreeing is just part of that person's personality (Langer, Blank, & Chanowitz, 1978). Through socialization we have learned to go along with requests, even totally mindless ones. In the above study a confederate of the experimenter asked people to be allowed to go to the front of a waiting line at a photocopy machine because “I have to make copies”. Surprisingly, a number of people yielded their place in the waiting line for such a mindless reason; mindless because the people waiting also “just had to make copies”.

13.1 Compliance and power
Often compliance is in response to power. French & Raven (1959) and Raven (1992) outlined six bases of power, including the coercive and reward power to which we referred above. Coercion can range from very severe physical force to milder signs of disapproval that in turn may be backed up with actions in the future. If you refuse to work overtime the boss may respond with something like “those who do not will not have a future with the company”. You might rightly think that you will be fired at the pleasure of the company. If you do work overtime, in particular if you do so without overtime pay (the standard in much of the western world is now 1.5 times normal pay for hours worked beyond 7.6 per day in a 38-hour week), you will be seen as a “company man” who identifies with the company and its goals. Privately you may curse the boss, but publicly you go along because of his power.
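As a minimal sketch of the overtime arithmetic mentioned in passing above, the following assumes the 1.5 multiplier applies to hours beyond 7.6 in a day (a 38-hour week spread over five working days); the wage and hours are hypothetical.

STANDARD_DAILY_HOURS = 7.6   # 38-hour week spread over 5 working days
OVERTIME_MULTIPLIER = 1.5    # the "time and a half" rate mentioned above

def daily_pay(hours_worked: float, hourly_wage: float) -> float:
    # Regular hours are paid at the normal rate; anything beyond the
    # standard day is paid at the overtime multiplier.
    regular = min(hours_worked, STANDARD_DAILY_HOURS)
    overtime = max(hours_worked - STANDARD_DAILY_HOURS, 0.0)
    return regular * hourly_wage + overtime * hourly_wage * OVERTIME_MULTIPLIER

# Hypothetical example: a 10-hour day at a wage of 20 per hour
# 7.6 * 20 + 2.4 * 20 * 1.5 = 152 + 72 = 224
print(daily_pay(10.0, 20.0))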
French & Raven also referred to other forms of power. The boss may also be seen to have legitimate power, i.e., his position gives him the right to make the request. The police also have legitimate power. Society, which has given the police their power, generally accepts their right to enforce the laws of the land.

In case there might be confusion about the legitimacy of the person making the request, we also dress these authorities in sanctioned garb: the uniforms of the police and armed forces, the white coat of a physician, and the black robes of a judge. Those who dress appropriately are more likely to obtain compliance than those who do not (Sedikides & Jackson, 1990). Legitimate power is related to the social consensus we have regarding social roles like the boss, police officer, teacher, and parent. We accept that they have a legitimate right to make requests and ask for compliance.
We are also more likely to comply if the person making the request is perceived as having some form of expertise. We comply with teachers because they should know more than we do. We defer to scientists who have spent many years of hard labor trying to understand their field of study. We are also likely to follow the advice of doctors, as their expertise is critical to our health. Sometimes simply having information may be persuasive. Today we are in a heat wave of more than 34 degrees Celsius. We can give this information to a friend who plans to visit, and he may choose to delay his visit, or alternatively to pack very light summer clothes. Information can be a source of social influence. Furthermore, we are also more likely to listen to those with whom we identify (Orina, Wood, & Simpson, 2001). If we like the teacher and want to develop a closer relationship we are more likely to listen to lectures and instructions (Richmond & McCroskey, 1992). If we like our spouse and want to maintain a good relationship we may be more likely to agree with his or her political and religious beliefs.

Finally, to some degree compliance is affected by the mood of the individual (Forgas, 2001). In general people are more likely to comply when they are happy. You can imagine that yourself. If you are very happy, perhaps in love, you are more likely to agree to any request. You may be so happy that you will agree to even absurd demands, like carrying your spouse on your back if requested. Think of times when you were happy; did those times lead to more willingness to go along with requests from family or friends? For those who want to influence another person, it helps to get the targeted person in a good mood. Children and spouses practice this by waiting with requests until the “right time”. We examine the mood of the boss: “is this the right time to ask for a raise, is he/she in the right mood?”

13.2 Getting compliance through manipulation
Sales people have learned that certain techniques are more likely to result in sales; charity workers have learned the same techniques in order to obtain donations. One study by Freedman & Fraser (1966) demonstrated the “foot in the door” technique that we also discussed briefly in chapter 5. In this approach one increases compliance by making a small initial request and, once compliance is secured, coming back with a larger request. If we agree to do something not terribly challenging, we are more likely to comply with the larger request that follows. If you agree to sign a petition in favor of some political action you may be more likely to also make a monetary contribution. Some think that in responding to the initial request we are somehow changing our self-image (Burger, 1999). For example, in signing the petition we have begun to perceive ourselves as somewhat politically active. Others believe that in western cultures we have a strong motivation to appear consistent (Guadagno, Asher, Demaine, & Cialdini, 2001). If we sign the petition it would be consistent to follow up with other political activities. Finally, some researchers (Gorassini & Olson, 1995) believe that we change our perception of the situation that frames the request. If we sign the petition we have already taken one significant step; volunteering for other activities is not different from this request, it belongs to the same situation.

The “door in the face” manipulation involves asking for a very large effort and then, when refused, following with a request that seems reasonable given the initial outrageous demand. One of us was recently involved in the purchase of a vehicle. The car was marked with the manufacturer's “suggested retail price”, which in car sales in the US is meaningless. Only the naive would pay this amount for a car. Car dealers then put a “sales price” on the car to indicate what a good deal you are getting, and you may even think it is reasonable. That price is of course where the real bargaining starts. If you know the invoice price you can make a bid closer to the cost to the dealer, and if he still makes a profit he may agree.

Perhaps you are asked to volunteer for a minor service assignment in your community, which because it seems minor you agree to do. Later, you learn that much more time is required, but since you have already agreed you continue to serve; this is often called the “low-ball” technique. Finally, sales people are often successful in making sales by presenting the product in the best possible light, and assuring the customer of what a great deal it is. When the customer hesitates the sales person will say “and that is not all” (Burger, 1986), and offer additional products at no additional cost. For example, the car sales person may say “if you buy the car we will in addition also pay for the gas you consume the first year”. The above manipulations are all ways of altering people’s perceptions and thereby increasing compliance.

When oil was discovered at the bottom of the North Sea in the late sixties the public debate was framed by Norwegian spin doctors as a choice between two alternatives: to take out huge quantities of oil per year, or far fewer barrels. Framing the question as a choice between these two alternatives silenced a possible third option: to take out no oil at all.

13.3 Convincing people to comply with morally bankrupt behavior
Too many times in human history the demand for compliance has not come in the innocuous form of demands by parents, teachers or sales people, but in demands which resulted in genocide and evil. Few people would be prepared to commit evil upon demand, but history shows that the ground can be prepared. At times the ground is so well prepared that entire nations may follow the demands for compliance to the total destruction of peoples and nations. We can observe that with the Nazi regime in the 1930s and 1940s. It organized a special propaganda office, led by Goebbels, a close and slavish follower of Hitler, to prepare the German people for the coming catastrophe. Hitler was of course aware of the power of propaganda, as discussed in his book Mein Kampf (My Struggle). In this Nazi bible Hitler showed his disregard for truth and fairness; the objective of propaganda was always to serve the Nazi cause and the decisions of its leadership. The Nazis, along with other totalitarian regimes, were more interested in shaping perceptions than in education. The objective of propaganda is to manipulate behavior in the desired direction (Jowett & O’Donnell, 1999).

In propaganda the Nazis excelled in the manipulation of grievances and emotions (Zeman, 1995). Since they controlled all means of communication they had what was in effect a “captured audience”, with few or no alternative sources of information. If you repeat something often enough people may eventually come to believe even the absurd. The Nazi propaganda machine constantly advocated two political ideas. One was the idea that there was not sufficient space within Germany proper for the Germans. As a great people, they were told, they had a right to more space, even if it inconveniently belonged to others. We can see similar ideas in Zionism in its attitudes toward the land of the Palestinians. The second idea of Nazi propaganda was racial purity, the great phobia that associating with, and especially marrying, foreigners would dilute the bloodlines of the master race. The first idea led to World War II with an estimated 50 million dead. The second idea led to the holocaust, in which millions of Soviet war prisoners, people of other nationalities, and those deemed undesirable like Jews, communists, homosexuals and Gypsies were physically destroyed.

That a people needed more space was not a new idea in Germany, nor were the ideas that led to the holocaust. They had a cultural foundation of perhaps centuries and were accepted by many Germans even before the Nazis came to power. Propaganda is more likely to persuade when there is such a base of preexisting beliefs. Eventually all enemies of the state, defined as both ideological and racial misfits, were described as nothing more than pests which ought to be destroyed (Staub, 1989).

Of course what the Nazis did in propaganda is essentially no different from the propaganda of other nations in wartime. During World War II the U.S. propaganda against the Japanese contained dehumanizing descriptions similar to those we saw in Nazi propaganda. During the war on Vietnam the US media described the Vietnamese in similarly unflattering terms, among which the mildest was calling the liberation organizations “terrorists”. All governments prefer little or no opposition to their cherished policies. The one difference is that where freedom of expression is allowed not all media go along with the official line. In such societies there remain at least limited opportunities for educated people to read the truth between the lines.

13.4 How could people go along with evil: the studies on obedience
In the aftermath of World War II many social psychologists pondered the collective catastrophe that cost almost 50 million lives. How could people go along with it? Why had there not been more resistance? In remembering genocidal obedience we wish to pay high tribute to those who sacrificed everything in resisting the evil of their day. One line of thought was that it was exceptionally sadistic people who committed these cruel acts. Others thought that all people could potentially participate in similar crimes given sufficiently powerful forces inducing obedience.

Part of the reason for accepting genocidal behavior may be found in our socialization. Most children are told to obey their teachers and others who are recognized to have legitimate authority. Much obedience in society is internalized, and we don’t give these behaviors much thought (Blass, 2000); we stop at red lights automatically, for example. But are people likewise socialized to obey orders to hurt or even kill others? Were the participants in the genocides just brutal thugs who enjoyed hurting others? Or is it possible (a more frightening thought) that they were just ordinary people who found themselves in situations that appeared legitimate, situations which can, sadly enough, be seen in any war?

Arendt (1965) was an observer at the trial of Adolf Eichmann in Jerusalem. Eichmann was the person directly responsible for the efficient transportation of the Jews and for the killing machine that murdered millions of people. He was not an extraordinary person, but gave in every way the appearance of a normal and ordinary citizen (Miller, 1995). When he stood on the gallows he said “I did it for my country and flag”; in his mind he evidently still believed he had just done his duty and obeyed legal commands. Of course there are rules of war that essentially tell the soldier he cannot use commands as an excuse to commit genocide, but finding themselves in a situation of war most people do not stand up against their superiors.

Is evil as great as genocide committed by sadists, or by ordinary citizens following the instructions of leaders and government? This was the question that greatly interested Stanley Milgram (1963, 1974, 1976). Milgram, having worked with Asch, wondered whether people would conform at any price. After all, the conformity expressed in the Asch experiment was rather innocuous; nobody was actually hurt. What would happen if an individual found himself in an experiment where a real conflict existed between personal norms of not hurting others and demands from the experimenter to do just that? How would an ordinary person resolve that conflict? Would they hurt others in obeying the commands of the experimenter, or would they refuse to participate?
In Milgram’s experiments the experimenter solicited people to participate in a teacher-learner experiment. The participant was told that the experiment investigated the effect of punishment on learning by utilizing a shock apparatus. Each time the learner made an error he was to be shocked at ever increasing levels. In fact the teacher in the experiment was the true participant and the learner was a confederate of the experimenter. The real purpose of the experiment was to investigate people’s willingness to administer potentially dangerous shocks to an innocent victim. Although strapped into an electric chair, and responding with varying degrees of protest and hurt, the confederate did not actually receive any shock. He was trained to respond with varying degrees of protest to the constantly increasing levels of shock administered by the actual participant. The real experiment was to see, given the situation as presented, whether the actual participant would continue to obey the experimenter. Would the real participant continue to shock at ever increasing levels and against the protests of the “learner”?

The shock apparatus varied from 15 to 450 volts, verbally labeled from “Slight shock” to “Danger: severe shock”. In order to gain an appreciation of the pain administered, the “teacher” was given a sample shock of 45 volts. Although at the lower end of the scale, this shock was still painful, and was meant to provide a frame of understanding and empathy for the “learner” as the experiment continued. The participant then watched what he thought was another participant being strapped into the electrical chair, and the experiment began. The confederate began to make mistakes, and each time he was to be shocked in 15-volt increments. The “learner” began to react with a painful cry at 75 volts, and with increasing protests thereafter. At 270 volts the protests of the “learner” became screams of agony. At 300 volts he refused to answer; was he still conscious? The experimenter had a set of prepared responses to any hesitation by the “teacher”. They ranged from “please continue” to “you have no choice, you must go on”. The protests reached a level where the “learner” screamed “let me out of here…I have had enough. I won’t be in the experiment anymore” (Milgram, 1974, p. 56). When the participant hesitated he was simply told “you must continue” or “although the shocks are extremely painful, they do not cause permanent tissue damage”.
Given explicit reference to how dangerous the experiment was (450 volts, “danger: severe shock”), how many do you think would continue to shock at the highest levels? When a sample of psychology majors, psychiatrists, and other adults were asked, they estimated that only 1 percent would continue to 450 volts. The psychiatrist subsample estimated that only one in a thousand would shock to the highest level. In fact the average shock administered was 360 volts. A total of 62.5 percent continued to shock at the maximum 450 volts, and 80 percent continued even after the “learner” cried out that he had a heart condition and asked to be let out of the experiment.

How can we understand these results? The obedience was not due to sadism or personal evil, since the demands of the experimenter caused great anxiety and discomfort to the participants. Rather, as Milgram explained his results, it appears that the average person will obey the command of the experimenter even when this may cause harm or death. Could the participant have refused? Obviously yes; all he had to do was say “I am not participating” and withdraw from the experiment. It is hard to conceive that the experimenter had any special powers to enforce these commands. Perhaps there were conformity processes at work?

It seems difficult for the average person not to obey in the presence of an authority figure (Blass, 2000, 2003; Hamilton, Sanders, & McKearney, 1995; Miller, 1986). The Milgram studies were about the effect of the situation on obedience in otherwise normal people. The situation contained powerful influences, both normative and informational. The participant wanted to be liked by the authority figure, or at least not to disappoint him. Being liked under conditions of genocide also brought approval, perhaps even promotions and medals. There were also informational pressures. The situation was very ambiguous. On the one hand there was a believable and apparently legitimate experiment with specific demands. On the other hand, there are also norms in society that we should not hurt others. What to do? In such a conflicting situation we look to others, in this case the experimenter, for guidance, and he was quite unperturbed. He responded to the participant’s anxiety by saying, “you must continue to shock the learner, and yes it must be at ever increasing levels”. In the face of specific commands, but also of conformity pressures, the large majority followed orders (Krakow & Blass, 1995; Miller, Collins, & Brief, 1995).

Varying the conditions of the experiment Milgram observed decreases and increases in the level of obedience. Situations that made the individual conscious of his responsibility, which emphasized the sufferings of the victim, or which brought the victim in close proximity, all reduced obedience. At the same time increasing the physical distance between “teacher” and “learner” increased the levels of obedience, and made the teacher more willing to shock at higher levels.

13.5 Obedience or conformity to situational demands: The Larsen experiments
The results of Milgram’s studies showed that most participants obeyed the commands of the experimenter. It seems most of us are socialized to respond to teachers and other authority figures in this way. Eichmann, for example, was by and large a very willing and otherwise ordinary human being. Does that mean that people just get caught up in situations containing a variety of conformity pressures? Could this be investigated using a paradigm similar to that of Milgram? Milgram (1974) stated that he was certain there were personality factors underlying the willingness to shock an innocent victim, but he had not found them. Snyder & Ickes (1985) suggested that those in need of social approval were more likely to conform. If the situation was powerful enough we might then see compliance to the situation, and orders would not be necessary to obtain willingness to participate and continue.

Larsen and his collaborators (Larsen, Coleman, Forbes & Johnson, 1972) investigated these issues in the early 1970s. They carried out a series of experiments to examine the relative importance of the situation versus the personality of the participant in a Milgram-type experiment. However, rather than ordering the teacher to continue the experiment, they allowed the situation itself to create demands on the participant. Therefore we can say that they studied situational conformity rather than the obedience paradigm of Milgram. The results that followed were an even more devastating statement of the ordinary person’s lack of independence. As we shall see, the participants in the Larsen et al. experiments did not require commands to shock an innocent victim. Rather, the apparent pressure of the situation was sufficient to produce results very similar to those discovered by Milgram. To further reduce the pressure, the participants in Larsen et al. could choose any level of shock and could, for example, go back to lower levels if they felt that might be more useful.

Prior to the experiment the participants completed five measures of aggression and hostility in the guise of another study, with a time delay so that it would appear as independent testing in the minds of the participants. Subsequently these personality measures were used as predictors of behavior in the experiment. The results showed no relationship whatsoever between personality traits and laboratory aggression. This finding lends further support to the contention that it is the situation that exerts influence and not personality. Alternatively, it indicates that the behavior in the experiment had little to do with aggression, and more to do with conformity.

Four other conditions were explored to examine varying social learning and conformity situations. If personality is less of a factor, would the social learning that occurs by watching another person shock an innocent victim be sufficient to produce higher levels of shock as compared to a control condition? The participants arrived at the laboratory and were told, “we are a little behind in the experiment. To save time explaining the apparatus you can come in and watch the current teacher operate the equipment.” The participant was then shown a confederate of the experimenter who was operating the apparatus at very high levels of shock whenever the “learner” made a mistake. Would the mere fact that someone else modeled this behavior be sufficient to encourage the actual participant to also shock at high levels?

Another condition was called the “high model” condition. In that condition the subject had the experiment explained in front of the apparatus and was then told to proceed as in the control condition. The apparatus was left with the dial at 350 volts, inviting the interpretation that the previous participant had been shocking at these high levels.

Finally, in the conformity condition the participant was asked to make joint decisions about what level of shock to deliver with two confederates of the experimenter. Unknown to the participant, these confederates were of course instructed to call for increasing levels of shock in response to each “learner” error. The actual participant was maneuvered into the center seat and was the one to deliver the actual shock. Would the mere fact that two confederates increased shock levels induce the actual subject to follow suit?
In the control conditions the experiment was only explained as a teacher-learner experiment, and the participant was left to his own devices as to how to proceed, whether at low or high levels of shock. He was not told to go either up or down in shock levels; it was entirely his choice, and there was no pressure from the experimenter as he left the room.

13.6 Situational conformity and normative pressures
As can be seen, the above situations contained relatively mild pressures, and in no case did we have to encourage compliance. The experimental conditions nevertheless yielded significantly higher levels of shock compared to the control conditions. These findings lend support to the social learning underpinnings of the experiment. Despite these mild pressures the participants delivered shocks at increasingly high levels, even levels that might have injured the “learner” or otherwise endangered his health. The participants could have stopped the experiment at any time. Unlike Milgram, the researchers did not demand that the experiment continue. Yet none of the participants refused to continue once the experiment was started.

To repeat, we think these results contain an even more devastating statement about the ease with which cruel behavior can be manipulated in the ordinary person. In the Larsen et al. experiments there was no requirement or need to command, and still the participants went along. That conclusion is also supported by the willing participation of ordinary people in many of the real world’s genocides. Most participants in these grisly events do not require the commands of others; the modeling of “legitimate authority” is sufficient. Out of the 213 participants in the initial study only 3 refused to participate, after which the experiment was explained to them and they were thanked.

The results showed that all three experimental conditions produced higher levels of shock compared to the control conditions. The average level for the control condition was 157 volts; for the model condition it was 172 volts; for the high model condition (where the apparatus was left at 350 volts) it was 237 volts; and for the conformity condition 293 volts. Overall the experiment demonstrated results similar to the Milgram experiment, but without instructions to go ever higher in the levels administered and without compelling commands to continue. Again, the results show how easy it is to elicit cruel behaviors from otherwise ordinary participants.

In other experiments participants were shown to be willing to shock even a small dog. After being introduced to the small dog strapped into the electrical chair, the experiment was explained as one on learning, in this case learning by the dog to discriminate in paired-comparison trials. Had real shocks been administered, the dog would not only have died, but would have been tortured in the process at the shock levels delivered (Larsen, 1974a). Another study demonstrated the willingness to shock a member of a racial minority (Larsen, 1974b). These experiments lend further support to the implicit pressure that the situation exerted on the participant.

Were these pressures normative? Did the participants comply out of a desire for approval? Another experiment (Larsen, Martin, Ettinger, & Nelson, 1976) demonstrated that those high in approval-seeking motivation shocked at significantly higher levels when compared to those with lower needs for approval. It is less likely that informational conformity played a role, as the experiment was completed in solitary conditions with only the initial explanations used in the control condition of the previous studies. Together these studies argue for the powerful role of situational pressures expressed through both normative and informational conformity. In the model conditions the participant looked to those modeling the behavior, or for cues in the experiment. In the control and approval-seeking conditions it was primarily the normative pressure of pleasing the experimenter that played a role, as there were no direct or indirect informational pressures or models.

13.7 Why do we obey or conform?
There are obviously normative pressures in experiments within the obedience paradigm of Milgram, as well as in the situational conformity studies of Larsen et al. When people face an apparent authority like the experimenter, it is difficult for most to decline participation (Blass, 2003; Meeus & Raaijmakers, 1995). When in addition there are peer pressures, as we saw in the Larsen et al. experiment, participants shocked at higher levels. The normative pressures are rooted in the desire to be a good participant and to please the experimenter. There are also informational pressures at work. The experimental situation is ambiguous, and the participants needed information about how to behave. If the “learner” cries out in pain, what is the appropriate response? The participants looked to the experimenter for this information; he was, after all, the expert.
There were also other reasons why the participants continued. The step-by-step increase in shock levels made the process very seductive. After all, if you shock a person at 15 volts, why not 30 volts? And if you are at 345 volts, why not 360? This gradual increase was seductive to most participants, who could not clearly discern where the line lay between conformity to the experiment and harm to the “learner”. Once the participant had justified one level of shock, it provided the justification to go to the next level. If a participant wanted to break off participation he did so against large normative pressures to continue (Darley, 1992; Gilbert, 1981; Modigliani & Rochat, 1995).

In Nazi Germany we saw a similar procedure. Laws were gradually changed to allow discrimination, and groups were selectively persecuted. First the Nazis went after the communists, then other groups followed. Having not objected to the initial persecutions, German citizens found no easy way to resist what followed. Fascist regimes use similar step-wise procedures to train those who torture political prisoners. Initially trainees were ordered to deliver blows in the course of casual contact with the prisoners. This was followed by watching torture committed by others (social learning). Next they participated in group sessions with fellow torturers that included floggings or other forms of collective torture. Only after all these steps was the candidate considered ready to be in charge of his own torture session (Haritos-Fatouros, 1988; Staub, 1989).
In the experiments most participants likewise found themselves caught between opposing demands.

Milgram found that when empathy was created for the “learner”, participants decreased the levels of shock administered (Blass, 2003). If the experimenter brought the “learner” closer, for example by having the participant sit next to the “learner”, or having him force the arm of the “learner” onto the shock plate, then obedience decreased. Creating “proximity” increased empathy for the suffering of the victim. Is this not what makes modern warfare so cruel and lethal? Modern armies kill their enemies with missiles, smart bombs, and even drones that unleash missiles in another part of the world. During the American war on Vietnam millions perished from high-altitude bombing by B-52s, where the perpetrators never saw the carnage on the ground. A former pilot explained his mission as follows. They would leave from a base in a nearby country. After a few hours of flying time they were over the target. They had an oven on board and would cook a pie, dump the bombs on the assigned target, and then return to base. Never did they have to confront the reality of the death and destruction unleashed on the ground. Thus increasing emotional distance decreases empathy with suffering and makes genocidal behavior more common and likely.

13.8 What would you have done in these experiments?
The high levels of compliance in these experiments were not anticipated by anyone. Although we saw these experiments as the laboratory equivalent of genocidal behavior, the experimental situations did not seem compelling. It should not have been difficult to resist and refuse to participate. This is what most people think whenever they are presented with the results. Having asked many people, we would inevitably get a “no” when we asked “would you have gone along?” From all walks of life, people who have never been in these experiments claim that they would not have behaved the way these participants did. Is that really so?

The real value of these experiments is that they lend support to the normalist position on genocide. Given compelling situations most people would in fact follow directives of evil from apparently legitimate authority and commit crimes of varying dimensions. Given the right circumstances the capacity for destructive conformity lies in all of us. These participants were not exceptional in any way, and neither were most of those who committed the horrors of world history. Most were very ordinary citizens.

The actions of reserve police battalion 101 in a massacre in occupied Poland in 1942 illustrate the point (Browning, 1992). These reserve police officers were all peaceful citizens of Hamburg who volunteered to serve in this unit, probably to avoid front-line service. So when they were asked to round up the Jews of the little Polish village of Jozefow and told they were to shoot them, it must have come as a shock. However, their resistance was feeble. Some tried to leave the area, some stood at the back of the execution squads, or tried to miss when they fired. However, hardly anyone stood up and refused the criminal command outright. There was no easy way to disobey.

In a similar way the Milgram and the Larsen et al. participants found themselves in a compelling situation and complied with orders or conformed to the situation. People who have good intentions, but lack the moral fiber to resist an evil situation, pave the road to hell. Milgram offered the opinion that, were death camps similar to those of Nazi Germany to be created in the United States, sufficient personnel to staff these camps could be found in any mid-sized American city (Blass, 2003; 2004).

It is important to realize that these experiments were not about aggression. According to Milgram even Eichmann was sickened by what took place in the concentration camps, but he did not have to face it on a daily basis. Instead he was a bureaucrat who gave the orders that allowed the death-dealing machinery to perform efficiently to the highest German standards (Milgram, 1976). Since the ground had been prepared for a long time, generations really, it was easy for participants to feel that they were doing the right thing; they were, after all, only following orders.

Like Eichmann, the participants in the aforementioned experiments felt released from any feeling of responsibility. The experimenter was an apparently legitimate authority who took responsibility for all that happened. The experimenter provided cover for the participant, as legitimate authorities do in genocides. Whenever we see genocide in the world it is always supported by an ideology and an authority that legitimizes the behavior (Zajonc, 2002). Cruel behaviors are transformed into acceptable, even laudable actions that deserve praise and medals, not condemnation.

The behavior in these experiments also shows that people will often act contrary to their moral values when the situation provides sufficient pressure. Although participants were torn between the desire not to harm the “learner” and the demands of the situation, the pressure of command or conformity overcame any hesitation. Although compliance was explicitly commanded in the Milgram experiments, it is important to remember that this was not the case in the Larsen et al. studies. Yet in both cases participants were able to rationalize their behaviors and comply with the demands made. Again it was the ordinary person in Nazi Germany that made evil possible. German civil servants cooperated willingly with the holocaust by doing the necessary paperwork. They did not directly kill anyone, but they did the work necessary for the machinery of death to function (Silver & Geller, 1978).

13.9 Underestimating the power of the situation: the fundamental attribution error
Typically, as noted above, people told about these experiments have negative views of the participants, and view the behavior as some type of moral failing. In our individualistic society it is common to overestimate the power of individual dispositions and underestimate the influence of the situation. The aforementioned experiments, especially those that emphasize situational conformity, show again that the power of the situation should not be underestimated. We must be on guard against the fundamental attribution error if we want to understand the social processes that produce both good and evil in society (Bierbrauer, 1979). While most people are still inclined to believe in the responsibility of the individual, social psychologists have shown repeatedly that the power of the situation can overcome personal inhibitions. Even the commanders of the concentration camps were not outwardly different from ordinary people. They would relax after a hard day’s work of killing thousands by listening to Beethoven or Schubert, and carried out their deadly work without any apparent personal hostility (Milgram, 1974).

14. Do cultures differ in conformity?
Given the fundamental attribution error, it is natural to ask whether cultures differ in their expression of conformity. Although conformity and obedience may be found in most societies, they may vary in frequency (Bond, 1988). Children in collectivist cultures describe themselves as being more compliant and less likely to defy adult expectations compared to children in western societies (Garbarino & Bronfenbrenner, 1976). However, as we have seen, participants in the Milgram-Larsen experiments came from individualistic societies and yet complied and obeyed at high levels. Perhaps there is something even more basic than culture: human nature and dependency. The need for social approval is universal and seems to override any cultural differences. Compliance to evil demands and commands is therefore universal, and can, given the right conditions, overcome any good or generous impulse of the individual.

15. Ethics and political correctness: the search for the truth of the human condition
As mentioned in chapter 1, the above studies by Milgram caused a political storm in psychology that had many consequences. A psychologist (Baumrind, 1964) unleashed a barrage of criticisms of Milgram, including the claim that the experiments risked psychological harm through stress and subsequently lowered self-esteem. She found the deception used in these studies to be unethical, and the debriefing that followed the experiment to be inadequate. Milgram (1964), however, strongly defended his work. He noted that no harm came to the subjects, that the participants were all given a satisfactory explanation at the end of their participation, and that they expressed positive feelings about participating.

Some think today that psychology has weathered the political storm that ensued, and has learned from this critique (Miller, 1986). However, one of the consequences has been the establishment of strict guidelines for the protection of human subjects in psychological experiments. These guidelines have now been interpreted to the point of absurdity on university campuses that fear loss of funding if they do not comply. The result is mindless preoccupation with studies that have absolutely no effect on participants, such as responding anonymously to simple paper-and-pencil surveys. Not only has a whole new bureaucracy been created, but studies must also be approved at multiple levels, including campus-wide committees that have no expertise in the field being investigated. It used to be that in social psychology we used deception to get at the truth; now we use informed consent (telling the subjects all about the study), which encourages less honest behavior from participants. If the participants in the Milgram and Larsen studies had been told that we were really investigating the normal average person’s willingness to shock innocent victims, would we have obtained the same results? Baumrind’s victory diverted psychology from its principal task of describing the human condition, even the unpleasant parts of what it means to be human.

In other words, there is now a new conformity in social psychology that is also represented in other parts of society. This conformity can be called “political correctness”, as the behavior generated is primarily surface compliance with government rules and regulations with little other meaning. Milgram, however, was right in his contention that no harm was done. A year after his initial research a psychiatrist interviewed the participants and found no psychological harm. There is every reason to expect similar outcomes in the Larsen et al. studies. The researchers followed the ethical standards of the time by providing full debriefing after the experiment was completed, and were of course available for any follow-up discussions. Without exception the participants left satisfied after these explanations.

Further, it could be argued that these studies provided the participants with a social inoculation effect. Just as we inoculate against physical disease, we think that these experiments inoculated the participants against mindless obedience and compliance. The Milgram studies are today discussed by students of social science everywhere, and are part of the history of our science. Many thousands of students have learned of the ease with which they can be manipulated or be made willing to obey commands to hurt potential victims. One of the important outcomes is therefore the determination of these direct or vicarious participants not to allow themselves to be caught in similar circumstances. We have no way to know, but might that have had a restraining effect on some battlefield of the numerous and continuous wars of the United States and Europe? We can believe that they have added to a well-justified skepticism of authority, of orders, and of situations demanding compliance with unethical behavior. In that regard one must conclude that the benefits far outweighed any imagined harm to participants. The outcome, however, changed the history of social psychology in a permanent way, and will make it more difficult to study social behavior in countries where political correctness is the norm of the day.

Summary
This chapter discussed the important role of social influence. Social psychologists recognize three forms of influence that produce changes in behavior. Conformity is behavior resulting from the pressure of others; students engage in binge drinking because this behavior is favored by their peers. Compliance is where people respond to specific requests or demands. Typically compliance involves people in unequal power relationships, where the more powerful have means to encourage or enforce compliance. Obedience is where the individual yields to influence because the person with power commands the performance of certain behaviors. Obedience is basic to all the genocides of the world, along with the apparent legitimacy of the authority that issues the order.

Although we think of conformity in pejorative terms, as a manifestation of mindless behavior, going along with others may also be wise. In many cultures it is essential for social harmony and the effective functioning of society. In history we have seen societies liberate themselves through conformity to norms of nonviolence, as in the case of India, and also in the civil rights movement of Black people in the United States.
Some conformity is so fundamental that we are unaware of its presence. The ideomotor effect described by James refers to the unconscious mimicking of others. Various studies show that mimicry is experienced as flattering, and it perhaps became part of the human repertoire because it served to advance the individual.

The classical studies were discussed because they affect thinking in social psychology even today, and changed the history of our discipline. Sherif in 1936 studied how group norms evolved in the autokinetic situation, where participants stare at a stationary light in a dark room and experience the illusion of movement. Individually they experienced varying lengths of movement, but when making estimates in groups a group norm soon emerged to which all members eventually agreed. The autokinetic effect was demonstrated in a situation of ambiguity. Informational conformity occurs when people are in uncertain situations and have to look to others to decide the appropriate course of action. Research has shown that informational conformity may lead to errors in identifying criminal suspects, which is why such identification must occur in private and without any clues or pressures from the situation or law enforcement.

Mass hysteria is a consequence of informational conformity. In times of crisis and war the need for information is high, and as we have seen it can produce hysteria on a scale that includes millions of people. Historical examples of mass hysteria include the invasion-from-Mars scare, and the persecution of those with minority opinions during the times of McCarthyism. In other cases we see that informational conformity also plays a role in mass psychogenic illness. People may become ill, feel the same symptoms, and be taken to hospitals, but without any physical cause. Ignorance can produce informational conformity. McCarthyism dominated the political and cultural life of the US for years, and those who did not conform faced severe sanctions including loss of jobs and prison.

Sherif’s study was carried out in an ambiguous experimental situation. Asch, a former student of Sherif, wanted to observe whether conformity would also occur in a situation where there was no ambiguity. In his study of perception there was no doubt about the correct response, yet he found astonishingly high levels of conformity: 75 percent of the participants conformed at least some of the time, and conforming responses were given on 37 percent of the critical trials. Since the conformity did not derive from the need for information, the only factor left was the desire to please others, the experimenter and fellow group members. Normative conformity occurs when we change our beliefs, perceptions, and views in order to be liked, and to avoid disapproval or punishment.

We can resist these influences. Even in crisis or under conditions of genocide there are those who resist and refuse to comply. At the base of all dissent is a healthy attitude of skepticism. Think where the world would be today if there had not been among us those who refused to go along with dogmas like the notion that the Earth is flat. Fundamental to all social progress is this attitude of skepticism.

It is, however, a common human desire to be liked. Rejection is experienced as extremely painful and may even cause self-destructive behavior. That is why solitary confinement is the worst form of social rejection. One reason we need social contact is perhaps the very long human dependency period, longer than that of any other living organism. We will go to great lengths to be accepted by groups of people we value.
Among the major factors supporting normative conformity are group size, the unanimity of group opinion, and the level of commitment to the reference group. The research on unanimity, however, shows that people find it easier to resist if they have even just one ally. These findings suggest that we should always include a “devil’s advocate” to argue the opposite point of view in all organizations in order to avoid the errors that derive from informational conformity. Not all groups are of equal importance; those groups that are central to a person’s life, family, and those political and religious organizations that are central to individual values exert the greatest conformity effects. When a person is strongly bonded to such organizations he is more likely to conform.

Resistance is also more likely if people observe models of individuation, people who have a desire to be different and stand alone, apart from the group. Where culture does not permit individuation we would observe more normative conformity.
More conformity may also be a consequence of personality. Those who have low self-esteem may lack the confidence to resist pressures. This idea fits with the need for acceptance as essential to normative conformity. Some effects have also been found for gender, with females being socialized to nurture relationships and to be slightly more conformist. Female conformity is especially high in situations where behavior is directly observed by others. Such situations of direct group pressure come close to the very definition of conformity.

Culture may also play a role. Collectivist cultures may exert more pressure to conform when compared to cultures that value individuality. Perhaps these higher levels of perceived conformity are due to our misunderstanding of the dynamics in collectivist cultures. In these societies conformity may be more in the nature of courtesy and respect, and valued for reasons of social harmony. In these societies population density requires an emphasis on courtesy and conformity.

Much of social psychology is ahistorical. Our research is reported as if it had validity for all time. Yet recent investigators have reported decreasing rates of conformity using the Asch paradigm. This chapter raises the question of what decreasing rates in Asch conformity experiments mean for conformity in the rest of society. In recent years the conformity experiments have been discussed widely, and the decrease in conformity may simply reflect greater awareness of them. Societal norms have also changed, and we now see more conformity to norms of political correctness. These norms, derived from the social movements of the 1960s, produce surface compliance as they frequently come with the power of enforcement and sanctions by government. There is also strong evidence from the Larsen et al. studies that conformity in the Asch paradigm changes with conformity levels in the broader society, that is, that we can observe transhistorical changes in conformity rates. This finding should be a caution that the work of social psychology never ceases, because as norms change our understanding may also need correction.

The forces of conformity can be observed everywhere in our daily lives. People rise for the national anthem, move through courtesy rituals, or follow fashions and fads without much consideration or evaluation. Most people will go along with the crowd. Often there are conflicting norms within the same society; how are these resolved? In the Hardy and Larsen study of women’s hemlines at a religious university, the resolution was a compromise between peer and institutional norms.

Preferred body images also demonstrate the powerful role of conformity, both normative and informational. There are cultural differences that determine the preferred female form. Where food is plentiful a preference for thinness prevails; in societies that struggle for survival, plumpness may signify fertility and well-being. Within our own society we can also observe how preferences have changed over time, with a current preference toward an unhealthy extreme thinness as promoted by fashion magazines. These extreme norms are primarily responsible for eating disorders among young women and girls as they seek to conform to anorexic images. For men there is now also an obsession with images reflecting the increased muscularity promoted in western societies. The GI Joe figure popular among boys shows how the image has changed over time, along with increasingly aggressive militarist accessories. Boys are indoctrinated early on into militarism.

Research has shown the powerful role of minorities in overcoming mindless conformity. Strong and principled minorities are basic to social progress. Minorities have not only the ability to resist, but can also change the opinions of the majority. The style of the minority matters as the nonconformist presentation must be both forceful and consistent. If that is the case the majority may reevaluate its viewpoints and change. Minority views are especially beneficial for tasks that require novel solutions. The dual process theory suggests influences are different for the minority and majority. The minority influence causes a reevaluation and produces pressures to reconsider. The majority has the power to produce surface compliance without necessarily private acceptance.

Compliance requires, among other things, power. We have observed in human interaction many sources of power, including coercion and rewards. Legitimate authority, expertise, and the ability to alter the environment are other ways of encouraging compliance. Mood may also play a role, since when you are in a good mood you are more likely to comply. There are also a number of ways to manipulate people into complying with a variety of requests. The purpose of these manipulations is to alter people’s perceptions of what is being asked and thereby increase the likelihood of the desired behavior.
We also have much evidence from both history and the laboratory of morally bankrupt behavior. Few people (except psychopaths) are prepared to commit evil upon demand. But when the group or national mind is prepared by propaganda, the results may be destructive on an unimaginable scale. Propaganda shapes the perceptions that allow for evil, whether among the Nazis of the past or in contemporary society.

The genocidal behavior of the Nazis did not end an era of human cruelty; it was but a chapter in the continuous brutality of the world. The dimensions of the cruelty of the holocaust led to the debate as to whether those participating were exceptional (sadists or psychopaths) or average, normal persons. The latter is considered the more frightening “normalist” position, holding that ordinary people perform evil on the scale of genocidal behavior. Milgram addressed this issue in his teacher-learner experiment. What he discovered was that the average person obeyed the experimenter’s command to shock an innocent victim even when it could cause great harm or possible death. This obedience paradigm was followed by the Larsen et al. experiments on situational conformity, where the researchers showed that they could obtain comparable compliance through the mere influence of the situation. In no case did the experimenter in the Larsen experiments command or encourage compliance, and the results can be considered an even more devastating statement on people’s inability to maintain their independence. It is important to remember that genocides rarely require direct commands. Most are carried out through the willing participation of otherwise normal people. In the Larsen et al. experiments the mere presence of an apparently legitimate situation had the required influence. In situational conformity we could observe both informational and normative pressures. The situation was somewhat ambiguous and created a conflict between socialized norms not to hurt others and the demands of the situation to complete the experiment. Informational conformity was reflected in the responses to models that served a social learning function in the experiments. Normative pressures were also present in the desire to please the experimenter and peers.

The Larsen et al. experiments returned to the issue of personality, raised but not answered by Milgram. The results showed no relationship between measures of aggression and hostility on the one hand and compliance on the other. However, a separate study did show higher levels of shock administration by those participants high in need for approval. In these experiments, as in real life, the participant was seduced by the step-by-step procedure. Similar step-by-step procedures are used to train those who employ torture to extract information. Creating empathy with the victim, on the other hand, decreased the level of shock in Milgram’s studies. Sadly that has little effect in modern warfare, as there is little proximity to victims who are killed by bombs or missiles. The important question is what you would have done in these experiments. Despite protestations to the contrary, nearly everyone who started the experiments completed them. The results lend support to the normalist position, that ordinary people can and do behave in ways harmful to others, and will often act contrary to their personal morals and values. We do not understand this in our society because of the fundamental attribution error, whereby we overestimate individual dispositions in behavior and do not recognize the power of the situation to seduce compliance. While there are some cultural differences, it should be remembered that the shock experiments were carried out in so-called individualistic societies and not in collectivist cultures. There is, however, something more basic than culture: the universal human need for approval and acceptance.

As we now know, the Milgram experiments produced a storm of criticism within psychology. The issues raised concerned the protection of the participants from self-discovery that, in the critics’ minds, damaged self-esteem. In fact follow-up results showed that no harm was done to the participants, and they might even have had the benefit of being inoculated against blind obedience or mindless conformity. Sadly the controversy has also directed research away from crucial issues like genocidal behavior toward more innocuous issues of little relevance to the human condition. The name of the new conformity is “political correctness”, which produces mindless conformity to the point of absurdity in academia. The laboratory aggression studies, however, are classics that possess lasting value. Far into the future students will still be able to learn of the ease of manipulation, and of the potential willingness of ordinary people to participate in harmful behavior.




Being Human. Chapter 9: Hostile Inter-Group Behavior: Prejudice, Stereotypes, And Discrimination

Prejudice is a common attitude in all cultures and societies. We only have to look at the headlines of a daily newspaper to see the dimensions of destructive behavior that follow from prejudice. Recent history has seen the liquidation of millions of people as these victims were dehumanized by prejudice, allowing for their annihilation. In Europe we thought that after the massacres of the Second World War people would have learned the sad and terrible lessons of prejudice. However, since then we have seen the destruction of thousands of people in the former Yugoslavia, where Christians killed Muslims and vice versa.

Some group differences may be important, but most stereotypes underlying these killings are based on myths with little basis in truth. Religion, rather than being the great unifier, has provided the ideology for killing regardless of culture and society. In India and Pakistan, Hindus are pitted against Muslims. In Palestine those who identify with Jewish ancestral myths are pitted against those who believe in Muhammad. In Rwanda the ethnic Hutus are set against the Tutsis. The list goes on and on, encompassing all societies.

The Vietnamese have reservations about the Chinese, and the Chinese think ill of the Japanese. Can you think of any society which does not display negative feelings toward other ethnic or national groups? Do you remember the conflicts in East Timor, the continued struggle in Kashmir (Hindus versus Muslims), in Sri Lanka (Muslims versus Buddhists), the struggle in Northern Ireland within a single religion (Protestants versus Catholics), and Iraq (Shia versus Sunni)? All these examples demonstrate intergroup enmity as a prominent and decisive element of the human condition.

Within societies there is also prejudice. Many, if not most, societies display gender prejudice against females. Under China’s one-child policy, more boys are born than girls; one result is a surplus of lonely men once these cohorts reach adulthood. In India parents seek to know the sex of a prospective child, and female fetuses are often aborted. Unequal salaries for equal work persist between the genders in many societies. In the western world we also observe prejudice toward those who do not fit ideal body images. Fat people are viewed negatively, while unhealthy thin body forms are promoted, as we have seen in chapter 3.

All minorities are subject to some prejudice. The US has a long and distressing history of prejudice toward ethnic nationalities and minorities. The prejudice toward the native (Indian) population initially led to attempts to use them as slaves. When they proved unsuitable for that, native societies were largely destroyed and the survivors placed in controlled reservations. The long and painful history of slavery in the US is known to all. It ended only with the Civil War in 1865. The legislation which followed ensured that black people were kept segregated in inferior status and allowed for their continued exploitation. Only in the 1960s did the civil rights movement put an end to the worst visible forms of discrimination in our society. However, even today Black people continue to bear the consequences of a prejudicial society. Poverty, poor housing, disease, and crime continue to afflict those who live in America’s racial ghettos. Similar results of prejudice can be found in other nations, which have also produced divided and segregated communities.

The presence of prejudice can also be observed in the many derogatory terms used against nationalities in the US. Hispanics are called spics, greasers, or wetbacks; Asians are described with words like slants, slopes, chinks, or japs; Blacks are called niggers, coons, jigaboos, or jungle bunnies; Germans are stereotyped as krauts, and Italians, as wops or dagoes. During the war on Vietnam, the Vietnamese were called gooks by the American soldiers. These terms are all pejorative words used to denigrate the human value of these national groups. Together these words serve the cause of prejudice by increasing social distance between groups and thereby allowing for the brutalities. Every society can find similar prejudice toward their ethnic and social minority groups.

Not only minority groups are targeted; dominant groups are also subject to prejudicial distortion. Prejudice is indeed a two-way street, where any group can be subject to common ignorance. Today the US is still dominant in the world. However, Americans are also subject to prejudice (Campbell, 1967). Americans are seen by the British as pushy and excessively patriotic. Some of these stereotypic views are very resistant to change, as certain views have been present for several centuries (Schama, 2003). The prevalence of prejudice suggests that it is part of the human condition. Is that true? If it were true, we could do little to change the conditions of hostility in the world. As we shall see, prejudice is complex, but it is largely learned and can therefore be unlearned.

Given the complexity of human behavior, we are not likely to find any one theory or set of principles that can explain all causes of prejudice. Why is it present in every society? What can be done to ameliorate the effects of intergroup hostility? These are questions that will be addressed in this chapter. As we noted, prejudice is an attitude. Elsewhere we have noted that attitudes have affective, cognitive, and behavioral components. Larsen (1971a) demonstrated the importance of both the affective and cognitive components in making social judgments. These three components are also found in prejudicial attitudes. The affective component is called prejudice, the cognitive component which sustains the attitude is the stereotype, and the behavioral component is the discrimination manifested toward the target group. Often the three components are simply referred to in the social psychological literature by the inclusive term “prejudice”.

1. Prejudicial attitudes: The affective component
In the context of prejudicial attitudes, the term prejudice connotes negative affect toward the target group. One can of course favor a group and hold positive affect toward it, but in social psychology prejudice is treated as a negative phenomenon. When we say someone is prejudiced, we mean that this person holds negative attitudes toward some group as a class of people. In practice this means that the prejudiced person pays little or no attention to individual traits or variations within the group, but describes all members as having similar undesirable characteristics. A person prejudiced toward blacks ascribes negative traits to the entire race, and will dismiss individual personality traits as unimportant. In the presence of the targeted group, a prejudiced person will feel negative and dislike the group as a whole. Negative feelings are not always expressed; with changing social norms people may try to hide their true feelings.

2. Stereotypes: the cognitive component
All attitudes have a supporting cognitive structure. In the case of prejudicial attitudes, we call these stereotypes. We have schemas of other groups which are based on our selective experiences in society. In the past black people were shown in American movies and other media in subordinate positions as servants or doing menial work. Our stereotype of black people is therefore less than flattering, and many think that being uneducated is the natural condition of black people.

Once incorporated, stereotypes are very resistant to change. Contradictory information is dismissed as the exception that proves the rule. When confronted with an educated black person, we split our prejudice into a new subset of the “educated” black. We continue to harbor our negative stereotype, as the subset allows us to deal with exceptions. Some Nazis created a subset of “good Jews”, which allowed them to continue to support the German government and endorse the Holocaust. When we stereotype, we simplify the world. It helps us process information before any interaction occurs. When we meet a black person, we do not have to get to know the person since our stereotypes have already prepared our responses.

Stereotypes are primarily cognitive in function; they allow for more efficient decision-making and shorten our response time. The cognition that follows uses mental shortcuts or simple heuristics (see also chapters 4 and 8), such as the belief that black people are “lazy”. When using simple heuristics or similar stereotypes we expend minimal effort when confronted with representatives of the target group (Fiske & Depret, 1996; Jones, 1990). Stereotypes can be personality traits which describe unfavorable qualities of members of the other group: black people are perceived to be ignorant, and so forth. Stereotypes can also take the form of attributions. If black people are poor, it is attributed to personal dispositions such as lacking a work ethic. We attribute motivations to many victims of stereotypes, explaining their poverty or ill health in terms that fit our conception of living in a just world: “people get what they deserve”.

2.1 The harmful effects of stereotypes
Recent research has demonstrated the harmful effects of stereotypes on the target group. The phenomenon of the self-fulfilling prophecy shows that when prejudiced people behave in ways consistent with a stereotype and convey their expectations, the victims come to believe in the stereotype and act in accordance with the expectation. The stereotype elicits behavior which confirms the stereotype for both the victim and the perpetrator. The stereotype that black people are lazy and unreliable may make employers unwilling to offer employment. Unemployment in turn produces hopelessness in the black person, the belief that there are no jobs, and subsequently the need to rely on welfare. The welfare dependency cycle is completed when white people act on their stereotypes, thereby reinforcing the expected behavior.

Research shows that victimized groups embrace stereotypes and often fulfill the predicted behavior (Snyder & Swann, 1978; Swim & Stangor, 1998). The self-fulfilling prophecy has been demonstrated in varying circumstances. It is a common stereotype that people’s memory deteriorates with age, and many elderly people believe it to be true (Levy & Langer, 1994). Since this is a common belief in our society, many people act on that prejudice toward the elderly. Many jokes are made about “senile moments”, and the elderly comply by developing the expected memory loss.

Minority self-awareness is painful when living in a prejudicial society. Targets of prejudice are frequently aware of the stereotypes describing their group. This self-awareness causes apprehension when the minority person is confronted with a task related to the stereotype. White males competing with Asian males in mathematics do so knowing the common stereotype that Asians are wizards in math. Likewise females are aware of the common perception that they are inferior to males in mathematics. The stereotype therefore offers a plausible explanation for poor performance. This is today called stereotype threat.

When victims of stereotypes feel under scrutiny or threat, the stereotype produces poor performance. Even females who are high achievers display lower performance when they are made aware of the common stereotype (Spencer, Steele, & Quinn, 1999). Stereotypes are, by their very prevalence in society, difficult to ignore, and the consequences are very real. The stereotype about racial differences in athleticism favoring blacks has similar consequences for white students. In one study white students were led to believe they were participating in a study of native athletic ability. Since the stereotype of white students is generally one of having less native athletic ability, the whites made less of an effort; they accepted the limits imposed by the stereotype (Stone, 2002). In one intriguing study of Asian women’s mathematical ability, the stereotype about racial differences had positive consequences when racial identity was made salient; however, when female gender identity was emphasized the women did poorly (Shih, Pittinsky, & Ambady, 1999). Many believe the effects of stereotype threat are long term, and may even produce negative physiological reactions commonly associated with stress (Blascovich, Spencer, Quinn, & Steele, 2001).

2.2 Common stereotypes ignore overlap and individual differences
Some stereotypes seem harmless. As noted, it is a common stereotype in America that black people are athletic, and that this is the reason some sports are dominated by blacks. Since there are many positives associated with athletics, are there any negative consequences? The main negative result is that the stereotype ignores the overlap in abilities between the racial groups, and individual differences (Stone, Perry, & Darley, 1997). Although it is true that blacks dominate some sports like basketball, it is also true that there are many great white players, and indeed players from every race. The stereotype is not fair to any group, because it assumes that black students should concentrate on sports, and that the athletically gifted white student should choose academics. The stereotype limits the potential of all groups.

Gender stereotypes also limit the potential of both males and females. There are acknowledged biological differences between the sexes, and most of us are grateful for these complementary traits. Some traits evolved from the evolutionary need to specialize tasks during the development of the human species. Nature assigned women the task of bearing children. Those who are good mothers help their gene pool continue, as their offspring have a greater likelihood of surviving (Buss & Kendrick, 1998). These powerful biological causes may have produced greater nurturing in females, and contributed to the stereotype of female nurturing behavior.

In all cultures, females are seen as more nurturant and passive (Deaux & La France, 1998). Research supports the presence of common perceptions of females as more socially adept, more friendly, and more supportive. Men, on the other hand, are typically seen as more dominating and controlling (Eagly, 1994; Swim, 1994). The problem with stereotypes is that they limit both male and female behavior. There are indeed fathers who are very nurturant and supportive of their children, and some mothers who abuse their children. Common experience shows that there is an overlap in behavior between the two genders and room for individual differences. Still, overall gender differences in nurturing remain and are consistent (Eagly, 1996).

2.3 Stereotypes and discrimination
The effects of stereotypes go far beyond perceptions. They can and do affect women’s opportunities for employment, and their subsequent work-related evaluations and success. Participants in one study evaluated a highly competent female physician. Male participants perceived her as less competent, and as having had an easier time becoming successful, when compared to a male physician (Feldman-Summers & Kiesler, 1974). The female participants were more egalitarian and perceived male and female physicians as equally competent, but thought there were fewer obstacles for males to overcome. More recently similar results were obtained (Swim & Sanna, 1996). When men are successful, people attribute this to native ability, whereas females are seen to rely on hard work. When men fail, it is considered bad luck or insufficient effort. Failure for females is perceived to reflect a lack of native ability, and therefore impacts negatively on self-esteem.

Victims of stereotypes come to accept the common beliefs. Socialization by parents, school, and society passes on the common stereotypes about gender. In one study, mothers who held stereotypic beliefs about gender differences in math produced daughters with the same mindset, who subsequently performed poorly on math tests (Jacobs & Eccles, 1992). The mothers’ acceptance of the negative math stereotype served as the self-fulfilling prophecy we discussed earlier.

Merton (1957) first used the term “self-fulfilling prophecy” to describe how the way we act toward the stereotyped target may encourage the behavior we expect. If we think blacks are hostile, we may approach them with anxiety or wariness. In response to these restrained behaviors, blacks may understandably react with distance and hostility of their own. In a study on job interviews (Word, Zanna, & Cooper, 1974) the experimenters noted that white interviewers treated black and white applicants differently. When the applicant was black, the white interviewer increased the physical distance and finished the interview earlier than in interviews with white applicants. The interviews were rated, and collaborators were then trained to interview a new group of white applicants the way the black applicants had been interviewed. When white applicants were treated the way the black applicants had been in the first phase, they too were evaluated negatively. The physical distance and indifference produced the same behavior in white applicants as in black applicants. The self-fulfilling prophecy suggests that through our expectations we elicit and reinforce stereotype-consistent behavior.

More serious consequences result when the prejudiced person is required to make quick judgments about the target group under stress. One common stereotype is the presence of a large criminal element in the black community, and a proneness to violence among black men. If you were a white police officer, would that stereotype affect your behavior when making an arrest? One experiment studied the effect of the black criminal stereotype on reaction time in a video-game shooting task. The participants were presented with images of both black and white stimulus persons, and told to shoot those who were armed. The results showed a shorter reaction time toward a black person holding a gun than toward a similar white target (Correll, Park, Judd, & Wittenbrink, 2002). The reaction time was consistent with the stereotype, and could have serious consequences for young black men who might appear threatening to arresting officers.
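To make the reaction-time logic of such a task concrete, the sketch below is a minimal Python illustration; the latencies, trial structure, and variable names are invented assumptions, not data or procedures from Correll et al. It simply averages simulated “shoot” response times by target race for armed targets, where a shorter mean latency for armed black targets would be the stereotype-consistent pattern described above.

    # Minimal sketch: comparing mean "shoot" reaction times by target race
    # for armed targets, in the spirit of the video-game shooting task.
    # All latencies below are invented for illustration only.
    from statistics import mean

    # (target_race, armed, reaction_time_ms) for trials where the participant shot.
    shoot_trials = [
        ("black", True, 512), ("black", True, 498), ("black", True, 530),
        ("white", True, 561), ("white", True, 549), ("white", True, 575),
    ]

    by_race = {}
    for race, armed, rt in shoot_trials:
        if armed:
            by_race.setdefault(race, []).append(rt)

    for race, rts in by_race.items():
        print(f"Mean RT to shoot armed {race} target: {mean(rts):.0f} ms")
    # A shorter mean latency for armed black targets than for armed white
    # targets would mirror the stereotype-consistent result reported in the text.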

2.4 Functions of stereotypes
We categorize people according to the common beliefs in society. Stereotypes are communicated and socialized through the media, traditions, and our educational system. Stereotypes do not allow for the evaluation of the individual, but attribute to the entire group what we think are common characteristics. Stereotypes make the world simpler; otherwise we would have to stretch our minds when trying to understand each targeted individual. Stereotyping is the lazy man’s response to the bewildering array of information presented by the many different representatives of the same group. Consequently, stereotyping requires minimal effort (Allport, 1954). It is similar to the heuristic rules of thumb discussed earlier in the chapter on cognition.

Is there some truth to stereotypes? A grain of truth is present in stereotypes, but they are generalizations which do not take into account individual variations. Stereotypes also do not allow for an evaluation of the history that brought about the “grain of truth”. Perhaps some females do poorly on math tests when compared to males, but there are historical explanations unrelated to native ability or intelligence. Yes, there is more crime in black neighborhoods, but there is also more poverty. There is some truth, but the stereotypes do not offer explanations. They serve only to simplify judgment and decision-making. Stereotypes overemphasize negative or positive traits, and underestimate the variability which is present in all social groups (Fiske, 1998).

3. Discrimination
The third component of any attitude refers to behavioral consequences. These have also been referred to above, as it is difficult to separate the components of attitudes. Now we focus directly on the discrimination suffered by the victims of prejudice. Discrimination proceeds from the very common ethnocentric assumption that the groups to which we belong are better on some criteria than out-groups. We shall discuss the in-group-out-group phenomenon in a review of the minimal group research. More broadly, these feelings are described as ethnocentrism, the belief that our school, church, religion, and nation are superior to all others. The most extreme example of ethnocentrism was found in the Nazi campaign to promote subhuman stereotypes of all socially undesirable groups.

The world presents a history of discriminatory behavior. During the Second World War the American government sent 120,000 Japanese Americans to camps, purely on racial grounds. No individual review was performed and all were treated alike, yet there was no reason to suspect that these Americans were a threat to the nation. In the McCarthyite period that followed the war, thousands of Americans lost employment and were otherwise persecuted purely for their political beliefs or for associating with unpopular groups. The Federal Bureau of Investigation (FBI) had particular assignments to follow and intimidate political dissidents, a pattern which continues to this day. This is part of the historical legacy of the US. More recently, Pettigrew (1998) has reviewed the substantial body of research on prejudice against and discrimination toward new immigrant minorities in Western Europe.

The in-group-out-group distinction applies equally to all groups. In one study white and black participants evaluated applicants for employment, and made attributions about why the person had been fired or had lost a previous job. White participants made more favorable evaluations and attributions for white applicants, and blacks held similar views of black applicants (Chatman & von Hippel, 2001). This discriminatory assessment has been found for other groups as well (Munro & Ditto, 1997). Even mere exposure to a stereotyped target can bring negative evaluations: a job applicant who simply sits next to an obese woman receives more negative evaluations (Hebl & Mannix, 2003). Stereotypes have survival implications for those in the targeted group, and for those with whom they associate.

Discrimination occurs because society gives permission. Many societies tolerate sexist humor because, while funny, it also puts women in their place. Do funny sexist jokes have other consequences? Some suggest that they put the mind at ease, and thereby prepare the way for discrimination. Much discrimination is disguised as norms about gender and race. These norms have changed drastically over the past three or four decades. The resulting ambiguity can make a targeted person unsure whether a rejection is discrimination or the consequence of some personal failure. When we know that negative decisions are the result of discrimination, we can accept them for what they are, and they do not impact our self-esteem (Crocker, Major, & Steele, 1998). However, in many cases discrimination is not so clear-cut. When a person is not retained or promoted, self-doubt may persist since the perpetrator usually covers his tracks with elaborate rationalizations. Van Beek (1993) showed that lower-skilled unemployed job-seekers in the Dutch labor market are primarily selected by employers on the basis of characteristics they cannot influence themselves, such as age, gender, and ethnic background.

Racial discrimination is all too real in our society. In one study the treatment of psychiatric patients was influenced by race (Bond, Di Candia, & McKinnon, 1988). The hospital used two methods of restraining violent patients: a milder method that isolated the patient in a separate room, and a harsher method that used straitjackets or tranquilizing drugs. An examination of the records of this hospital, which had an all-white staff, showed that the straitjacket and drugs were used four times more frequently on black patients than on white patients. This discriminatory treatment occurred despite there being no difference in violent behavior between white and black patients. It seems clear that the white staff had a stereotype of black violence, which translated into a harsher reaction to any problems with black patients.

If you are a member of a minority group, the results can be very negative in areas of great importance to you and your family. Larsen (1977b) investigated discrimination against Aborigines in Australia in three areas important to daily life: access to jobs, housing, and equal treatment in restaurants and other public venues. The method involved first sending a white stimulus person to ask about the positions and services, thereby establishing their availability. Subsequently an Aboriginal person of the same age, dress, and gender was sent to the same location within a short time interval. The results were truly astounding. Most establishments refused to consider Aborigines for employment or for rental housing. Even in public bars the service was discriminatory, as Aborigines found themselves ignored by waiters or delayed in getting service. The study got the attention of the Australian parliament, which debated the merits of civil rights legislation that at the time contained few sanctions for discriminatory behavior.

Other social groups such as sexual minorities have also been subject to discriminatory actions, and are usually not protected by any legislation. Some research has shown that visible individuals from these groups are treated as pariahs in job application procedures (Hebl, Foster, Mannix, & Dovidio, 2002). Although society has experienced many changes with respect to sexual norms, discrimination continues to affect the daily lives of many.

4. Changing social norms
We live in a world that has experienced massive migration over the past decades. More and more people have met representatives of other races and ethnic groups. Contact by itself does not reduce intergroup prejudice, though it may remove some of the most extreme stereotypes. In the southern part of the US a great amount of contact occurred between slaves and slave owners, but this did not improve the attitudes of the white owners. On the contrary, contact reinforced bigoted attitudes about the natural place of blacks in society, and the supposed natural-born right to own and exploit human beings. Part of racist ideology was the belief that blacks were not fully human; in the census a slave counted as only a fraction (three-fifths) of a person. On the surface racial bigotry has plummeted since the 1950s, when support for segregation was high (Hyman & Sheatsley, 1956).

The devastating effects of racial norms could be observed in little black girls’ preference for white dolls. The implication was clear: white was better (Clark & Clark, 1949). The negative impact of racist norms on the self-esteem of black people encouraged change, as did the “black is beautiful” movement. A later study showed that black children increasingly preferred black dolls, and that there was an acceptance in the black community that there were no important native differences between blacks and whites (Jackman & Senter, 1981).

4.1 Gender stereotypes
Beliefs about gender are deeply rooted in biology, history, and culture. It should not surprise us that gender stereotypes are still with us, and are resistant to change. There are those who would argue that gender based beliefs are stronger than racial stereotypes (Jackman & Senter, 1981). Males often view themselves stereotypically as more dominant and assertive, whereas females see themselves as more compassionate (Martin, 1987). Both genders accept the prevailing stereotypes.

However, gender-based attitudes are also rapidly changing. From the commonly accepted position of women as homemakers, attitudes now reflect the modern reality of women in the workplace (Astin, 1991). The self-deprecation that was part of women’s psyche at mid-century had largely faded by the 1980s (Swim, Borgida, Maruyama, & Myers, 1989).

4.2 Prejudice in intimate relationships?
Social cost is defined as the approval or disapproval of significant others for interacting with targeted groups. People are aware of and sensitive to social costs, and this affects hostile and aggressive behavior (Larsen, Martin, Ettinger & Nelson, 1976). Disapproval (social cost) from significant others is greatest for intimate relationships like marriage. Larsen (1974e) and Larsen, Ommundsen, & Larsen (1978) investigated the relative importance of social costs, dogmatism, and race, and found social costs to be the most significant variable affecting relationships in Norway as well as the US. They used the Bogardus Scale, essentially a scale of decreasing intimacy ranging from choosing the targeted person for marriage to wanting to exclude members of various ethnic groups from the nation. You might not mind an immigrant coming into your country; you might even condone working with immigrants and having them participate in social life. However, you might still demand that your daughters marry someone from your own ethnic group. In the most intimate relations, racism is alive and well, and present in nearly all cultures and societies (Sharma, 1981). Intimate relations carry the greatest potential social costs, as most people conform when their closest significant others, their parents and family, disapprove. Some twenty years ago, fifty-seven percent of white US respondents said they would be unhappy if their children married a black person (Life, 1988). The trend is away from such barriers, but it is telling that intimate relationships are the last remaining barrier to full equality. For example, students at the end of college felt more pressure not to date members of other racial and ethnic groups (Levin, Taylor & Caudle, 2007).

4.3 Subtle bias in racial and gender relationships
Changes in social norms have changed racial and gender stereotypes; it is no longer profitable to be a bigot. There was a time in America, from colonial times to the 1960s, when you could not be elected to even the lowest office unless you displayed bigoted attitudes. Now there are laws and an emerging social consensus that discourage the blatant display of prejudice. Perhaps this is just another way of saying that most people are conforming to new social expectations. They want to avoid punishment or gain the approval of society, as captured in the social cost concept. However, conformity is surface behavior. A person may continue to harbor negative feelings and stereotypes underneath the conforming behavior.

Subtle racism or prejudicial gender attitudes can be detected by the bogus pipeline method, in which the participant believes that the experimenter can read his or her true attitudes by use of a sensitive “lie detector” (Jones & Sigall, 1971). Participants were assigned either to a traditional attitude survey or to the bogus pipeline, where they were told that the machine could detect lies. Believing they would be found out, participants admitted to more prejudice in the pipeline condition.

Similar results were found for gender-based attitudes. On surveys, men and women expressed very similar attitudes on gender-related issues. When the pipeline method was used, men showed considerably less sympathy for the cause of gender equality (Tourangeau, Smith, & Rasinski, 1997). However, even using traditional survey methods, we can still observe subtle racism and prejudicial gender attitudes (Swim, Aikin, Hunter, & Hall, 1991).

In this “modern” form of prejudice, bigoted people are simply more careful in expressing their views. No one wants to be labeled a racist, as today the label carries negative consequences and connotations. At the same time, when the racist is in comfortable company, these prejudicial views are expressed. Subtle prejudice is a whole new arena for social psychologists seeking to understand the remaining intergroup hostility (Dovidio & Gaertner, 1996; Pettigrew & Meertens, 1995).

An important tool in achieving racial equality in education is busing students from racially segregated communities to racially integrated schools. Some studies have shown that most white parents accept the busing of their children from one white institution to another, but object vigorously when the educational system uses busing for interracial integration.

Perhaps old-fashioned racism is on the wane in the United States and Europe, reflecting normative changes and conformity. Race relations remain hostile, however, but the hostility is expressed in more careful and subtle forms (Kinder & Sears, 1981; McConahay, 1986; Haddock, Zanna, & Esses, 1993; Swim, Aiken, Hall, & Hunter, 1995). Modern racism rejects past beliefs in the racial inferiority of blacks, and other outmoded stereotypes. These outdated views are supplanted by more modern beliefs which sustain prejudice. Some contend in self-righteous anger that blacks, through affirmative action, are undermining self-reliance and fundamental family values. Modern racism depends heavily on dispositional attribution, where racists see minority disadvantages as caused by personal inadequacy and not by situations of poverty and discrimination. The disproportionate share of welfare assistance to blacks, and the crime rates in black ghettos, are viewed as the consequence of personal inadequacy, not of unending discrimination. So on the surface racial norms have changed, since many bigots reject blatant racism yet embrace subtle racist beliefs. It is an irony that egalitarian values can coexist with prejudice toward minorities (Gaertner & Dovidio, 1986). This apparent contradiction occurs because of the belief that unequal treatment has dispositional causes: unemployment among black people is attributed to black people being uneducated or lazy. Since racists generally benefit from the status quo in society, it should not surprise us that they favor the dominance of the in-group (Sidanius & Pratto, 1999). Modern racists will operate within the norms of our changing society, but will not help improve the lot of minorities, and depending on the specific situation may hinder attempts to improve intergroup relations.

Several studies have demonstrated the functions of modern racism. In one study, participants were led to believe that they were the only ones able to help a black victim. In that situation, they came to the assistance of the black victim slightly more often than a white victim. However, when the participants thought others could also help, their implicit racism dominated (Gaertner & Dovidio, 1977). In that condition they assisted a black victim less frequently than a white victim (38 percent versus 75 percent).

Another study examined the implications for employment. Prejudiced and unprejudiced participants rated black and white applicants the same when the applicants had similar credentials on all pertinent variables. However, when an applicant had variable qualifications, excelling on some characteristics but not others, prejudiced participants rated black applicants less favorably (Hodson, Dovidio, & Gaertner, 2002). The varied credentials allowed the prejudiced person to emphasize some credentials and not others, but always at the expense of the black applicant. The variable credentials supplied the cover which allowed the prejudiced person to rationalize his racism. Under conditions of variable credentials the bigot can pick and choose what is important, and make biased judgments without offending his self-perception as a fair person.

At the beginning of the chapter, we mentioned examples of intergroup hostility from various regions of the world. The history of the world is one of continuous warfare fed by stereotypes and prejudice toward supposed enemies. Norms may change, and the most blatant forms of discrimination cease. However, an underlying reservoir of hostility may remain to be tapped at a time of future conflict. Research on prejudice in Europe shows similar patterns to those of the United States. Subtle forms of prejudice also exist in Europe, as it too has experienced changing norms over the past few decades (Pettigrew, 1998; Pettigrew, Jackson, Brika, Lemaine, Meertens, Wagner, & Zick, 1998).

In a world where illegal immigration is becoming an increasingly controversial issue (Van der Veer, Ommundsen, Larsen, Van Le, Krumov, Pernice, Pastor Romans, 2004; Ommundsen, Van der Veer, Van Le, Krumov, & Larsen, 2006), it should come as no surprise that we see examples of both subtle and blatant forms of prejudice (Meertens & Pettigrew, 1997; Pettigrew & Meertens, 1995). These studies included measures of both blatant and subtle prejudice. In one study, those who scored high on blatant prejudice wanted to send the illegal immigrants home. Those who scored low on both scales wanted to improve the lives of the immigrants, and had a tolerant outlook toward them as fellow human beings. Those who scored high on subtle prejudice did not approve of sending the immigrants home, but on the other hand did not want to do anything to help or improve their lives (Pettigrew, 1998). Subtle prejudice may therefore have its effect through crimes of omission rather than commission, through acts of indifference rather than overt acts of discrimination. In either event, the outcome is negative for the targeted group.

Modern forms of racism may be even more potent than blatant prejudice. The underlying attitudes can be rationalized by well-established values such as social equality. Why should affirmative action benefit racial minorities and women? Many whites object, not on racial grounds, but because they see affirmative action as “unfair” discrimination toward poor whites and other groups, and as insulting to values of equal treatment (Tarman & Sears, 2005; Sears & Henry, 2003). Whether it is called modern racism (McConahay, 1986) or racial resentment (Kinder & Sanders, 1996), a reserve of ill will continues to be directed toward minority groups. Many whites harbor negative feelings toward ethnic minorities, and toward what they consider demands for special treatment. Modern racists view blacks, for example, as lazy, and believe they violate American values of thrift and hard work.

There are researchers who believe that racial attitudes have been replaced by concerns over merit and the value of color-blind equality (Sniderman, Crosby, & Howell, 2000). These assertions are modern forms of racist ideology, and provide the justification for continued racial inequality. Racism can be observed in the modern racist’s opposition to black leaders and to affirmative action (Sears, Van Laar, Carrillo, & Kosterman, 1997). Is racial prejudice just an issue of past history? Most of the evidence does not support that perspective.

In the case of gender prejudice the norms have also changed. Are there still more subtle forms of gender bias in society? By choosing which traits we consider important in females, we can still observe subtle but powerful effects on gender equality. Many men have ambivalent attitudes toward women. Ambivalence can be expressed by saying that women are less competent and intelligent than men, but that they are kinder and warmer human beings with greater interpersonal skills. Glick and Fiske (2001) studied ambivalent sexism in 15,000 men and women in 19 countries. They found support for the presence of a chivalrous sexism which included positive and protective attitudes toward women who occupied the traditional gender roles of wife and mother. At the same time, the men manifested hostile sexism toward women who were seen as usurping traditional male power. These ambivalent attitudes are particularly difficult to change, since there is ample rationalization for the prejudiced man to claim he has “positive” attitudes toward women and wants to protect them. The chivalry allows the sexist person to deny feelings of hostility, but still prevents gender equality. Whether sexist or racist, the ambivalent person supports the status quo by favoring those blacks and females who occupy traditional subservient roles, and by treating those who deviate from that image with hostility.

Many today deny that prejudice toward women still exists. Some men feel resentment toward the demands that women make. In a competitive society, men perceive that they are losing out to the advancement of women (Swim, Aiken, Hall, & Hunter, 1995). The feeling of unfairness fuels active opposition to affirmative action for females.

4.4 Subtle measures of authentic attitudes
How can we measure a person’s authentic attitudes toward minorities? In the “bogus pipeline” study mentioned above, subjects were led to believe that a lie detector would reveal when they were lying; consequently participants admitted to much higher rates of racism (Jones & Sigall, 1971). Another technique is the Implicit Association Test (IAT), which aims at uncovering prejudice among those who claim to be unbiased. The measure is based on reaction time to visual stimuli (Greenwald & Banaji, 1995). A series of pictures and words are presented on a computer screen (e.g., black faces and negative traits, or white faces and positive words). The participant is asked to press a key with either the right or the left hand depending on whether the stimuli conform to one rule or another. The basic argument is that reaction time will be shorter when the picture and words are consistent in the participant’s mind. If a black face is followed by positive words, the prejudiced person may hesitate, and this hesitation can serve as a measure of unconscious prejudice. To put it another way, unconscious prejudice toward black people can be assessed by the difference in reaction time between trials pairing black faces with positive words and trials pairing black faces with negative words. If no prejudice is present, there should be no need to evaluate the positive words differently, and reaction times should be the same. Of the roughly one million responses to the Web version of the IAT, about two thirds of white participants show prejudice (Nosek, Banaji, & Greenwald, 2002).
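The scoring logic behind such reaction-time measures can be illustrated with a minimal sketch in Python. The latencies and the simple mean-difference rule below are invented assumptions for illustration; the published IAT uses a more elaborate scoring algorithm, so this is only a rough analogue of the idea that hesitation on stereotype-incongruent pairings indexes implicit bias.

    # Minimal sketch of reaction-time scoring for an IAT-like task.
    # Data and the mean-difference rule are illustrative only.
    from statistics import mean

    # Hypothetical latencies in milliseconds for one participant.
    trials = [
        {"pairing": "congruent", "rt_ms": 642},    # e.g., black face + negative word
        {"pairing": "congruent", "rt_ms": 615},
        {"pairing": "congruent", "rt_ms": 660},
        {"pairing": "incongruent", "rt_ms": 731},  # e.g., black face + positive word
        {"pairing": "incongruent", "rt_ms": 755},
        {"pairing": "incongruent", "rt_ms": 712},
    ]

    congruent = [t["rt_ms"] for t in trials if t["pairing"] == "congruent"]
    incongruent = [t["rt_ms"] for t in trials if t["pairing"] == "incongruent"]

    # Longer latency on incongruent pairings is read as hesitation, and the
    # difference serves as a rough index of unconscious (implicit) bias.
    bias_index = mean(incongruent) - mean(congruent)
    print(f"Implicit bias index: {bias_index:.1f} ms")

A near-zero difference would correspond to the text’s point that, absent prejudice, reaction times to the two pairings should be about the same.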

In other studies, priming methods present a picture of a minority person followed by words that fit or do not fit the stereotype, and reaction time is used to assess prejudice. Many people deny being prejudiced, but nevertheless show reaction times that indicate the presence of these attitudes (Bessenoff & Sherman, 2000; Dovidio, Kawakami, & Gaertner, 2002; Fazio & Hilden, 2001). The more blatant aspects of discrimination and prejudice have been removed from people’s lives as a result of changing norms. Nevertheless, people have maintained many prejudicial attitudes even if they do not dare show them openly. There is still much ill will in the world, and much must be done to create societies free from prejudice, stereotypes, and discrimination. To work on these issues we must understand how we come to develop prejudice.

5. Causes of prejudice
This section examines the major ideas which explain prejudice. Some researchers emphasize the importance of early learning. Social inequality motivates prejudicial behavior and rationalizes prejudice. Realistic group conflict theory weighs the importance of competition in a world of scarce resources. Many people are frustrated and take out their anger on minorities, as described by scapegoating theory. Group categorization research shows that in competitive societies even trivial groups produce in-group bias. Social dominance theory describes our world as a hierarchy of winners and losers, in which the dominant fear loss of status and of real advantage in the struggle for equality. In social conformity theory, prejudice is an outcome of the desire to get along in communities with prejudicial norms. Social institutions lend support through mechanisms such as segregated access to education, as for example in Saudi Arabia and other Muslim countries; in western societies there are still jobs considered unsuitable for women, like being CEOs of large companies. Personality dynamics points to the authoritarian personality and belief incongruence as instrumental in producing prejudice. Social cost is an integrating variable underlying personality dynamics and conformity.

5.1 Theories of learning: The socialization of prejudicial attitudes
None of us is born with prejudicial attitudes. Prejudiced attitudes are formed through socialization at home, in school, in the community, and through culture. This is an optimistic statement, because what can be learned can also be unlearned. Learning theories are essential for understanding how some people become bigots while others are tolerant. If a child grows up in a home where the parents are prejudiced, the child may internalize these attitudes by simple imitation. Social learning theory describes how children learn concepts and attitudes by watching the behavior of significant others. If a father or mother uses pejorative words in describing racial groups, the child will be influenced and accept this version of reality. Likewise teachers and other significant people are powerful role models for children, who lack the critical faculties with which to question prejudice.

The community also plays a powerful role in shaping behavior. Many people are prejudiced simply out of a desire to get along in a prejudicial community. In the United States the South was the traditional repository of prejudice and bigotry. Prior to the civil rights movement in the 1960s, a person was in danger of ostracism or worse if he expressed tolerant attitudes toward black people. Prejudice was functional for obtaining social rewards and avoiding disapproval. As we have seen, such blatant attitudes have been in retreat for some decades. However, the more subtle forms of prejudice may still be reinforced by norms in the community. Since the community can no longer reject black people openly as a category, it can do so indirectly. Noting the unemployment, crime, and prevalence of AIDS among black people in the US, and attributing these to dispositional (personal) causes, is a key ideology of current bigots.

Reinforcement theory is a learning theory which asserts that behaviors followed by reinforcement are strengthened and will therefore be expressed on future occasions. The values of parents and the community play a role of reinforcing even subtle attitudes. Classical conditioning theory also plays a role, as we may come to associate positive or negative concepts with gender or race.

5.2 Early learning of prejudice
Normative prejudice is learned very early in life. As early as age 4 or 5, children begin to discriminate between racial groups and to understand the dominant community norms with respect to race. Some groups may not be salient for some children, as racial, ethnic, or national minorities are often segregated. However, by age 7 children are generally aware of the dominant norms regarding all major groups (Aboud, 1988). The reason early socialization in prejudice is so important is that, once learned, prejudicial attitudes are not easily changed (Sears & Levy, 2003). Prejudice serves selective perception: traits which conform to the stereotype are remembered, the rest discarded. The power of early socialization was shown in the study by Miller and Sears (1986). The norms of the community where the child grew up have more powerful effects in later adulthood than later experiences like adult occupation or regional attitudes. Freud endorsed the saying that “the child is the father of the man”, by which he emphasized the all-powerful effects of early childhood experiences. The literature on prejudice tends to confirm this viewpoint. As children grow up they are reinforced by the community for expressing the accepted prejudicial attitudes. For the most part this occurs at low levels of awareness and reflection.

5.3 The media and social learning
The media provides a forum for the social learning of prejudicial attitudes. Many who grew up in the United States will remember the old Amos ’n’ Andy show, which portrayed black characters in very stereotypic, happy-go-lucky terms. Minorities are often depicted in old movies in unflattering ways, as servants or doing other menial work. Although these stereotypes have changed in recent decades, other problems remain. The lack of visibility of a targeted group supports ambivalent attitudes. If children and adults do not see positive role models of gender or race, it is easy to rely on subtle prejudice.

The appearance of minorities in the media is largely stereotypic. The New Yorker is known for its cartoons reflecting on society. Thibodeau’s (1989) study showed that less than 1 percent of the cartoon characters were black, and these were most often depicted in stereotypic roles such as doing menial work. Another study of television in 2003 showed that although Latinos now make up about 13 percent of the American population, only 4 percent of television characters were Latino (Hoffman & Noriega, 2004). Other researchers have shown that minorities are repeatedly depicted in unflattering terms on television shows, as being linked to crime (Pachon & Valencia, 1999) or as taking advantage of society through welfare (Gilens, 1999). Is this stereotypic depiction in the media one reason that welfare funding is under attack? Do many whites think that undeserving blacks take unfair advantage of social support? The media rarely covers poor whites on welfare. Is the media supporting a stereotype of blacks as lazy and therefore undeserving? The media is a forum for social learning, reflecting common social stereotypes and norms. After all, script writers must get their ideas from somewhere, and they look to their own attitudes and those prevalent in the community to describe social reality. The presence of stereotypes in the media can therefore be thought of as a subtle measure of prejudicial social norms.

5.4 Social inequality and prejudice
We live in a world of real or imagined scarce resources. In many places people lack sufficient resources in the struggle for survival. Competing groups may encroach on territory deemed essential to sustain life, as in the control of water or productive agricultural land. In other cases the scarcity is created by advertising in modern capitalist societies. Many of the goods that people yearn for are based on desires manufactured by advertising. How many people really need electric toothbrushes or expensive perfumes? In capitalist society, envy is created by the lack of equality in consumption. Inequality in consumption led to the revolution of rising expectations, which many felt caused the riots in black communities in the 1960s. The deprived in society have a unique window on what they are missing through television and modern communication. When desire is stimulated equally through advertising, but consumption is distributed unequally, there is dissatisfaction and potential conflict. In social inequality we see the seeds of intergroup hostility.

5.5 Rationalizing social inequality
Life is a struggle over scarce resources. In that struggle some nations win the battle for improved standards of living while others fall behind, relatively speaking. Within a country, similar patterns of winning and losing are played out between social classes. Some people and classes are able to control and concentrate wealth, whereas others struggle just to survive. Prejudice is one way to rationalize social inequality. The exploitation of slaves was justified on biblical grounds and as “the white man’s burden”. From that point of view, slaves were better off in captivity, and white people did the slaves from Africa a favor by enslaving them. Likewise the building of empires was supported by prejudicial attitudes (Allport, 1954). The colonized people were seen as inferior, and colonization as an altruistic act that brought civilization and improved the lives of the native population. The stereotypes we have of gender and race help justify discrimination. If women are paid less for equivalent work, it is supposedly because they do not work as hard and have their minds on the domestic scene.

Dehumanization and pejorative stereotypes follow discriminatory behavior. In extreme cases, those who torture develop contemptuous attitudes toward their victims and become unable to discern any humanizing traits. By shocking or torturing, the perpetrators depersonalize victims and justify their behavior. The acceptance of waterboarding by the current US administration rests on the dehumanization of enemies as evil terrorists. Torturers in all societies rationalize their conduct by similar depersonalization of their victims.

Religion has been employed by some countries and communities to justify prejudicial attitudes. Several studies have shown that those who profess traditional beliefs are more prejudiced than those who see religion as an open-ended search for meaning (Gorsuch, 1988). Religion has been exploited to rationalize prejudice throughout history. The German army went into World War I with belt buckles emblazoned with the slogan “God is with us”. There is much in religious practice and writing that argues in favor of the existing social order. Some religions argue that God ordained some people to be poor and slaves and others to be rich and powerful. The Apartheid regime in South Africa in the last century was based on a white minority’s interpretation of the Bible. In war, many religious organizations bless soldiers on opposing sides as they go about slaughtering each other.

Not all religions justify social inequality. For some adherents who are very devout, religion is not related to prejudice. Some religious people view religion as a means of serving mankind (Allport & Ross, 1967). Others are open-minded in their search for truth and meaning (Batson, Bolen, Cross, & Neuringer-Benefiel, 1986). Some religious people put their lives on the line in opposing the Nazi regime (Reed, 1989). In distinguishing between the dogmatic and the open-minded, we see a difference between those who are religious for reasons of social conformity, who tend to be more prejudiced, and those who are religious in an open-ended search for truth and in service to their fellow human beings, who are less prejudiced.

5.6 Realistic group conflict
Realistic group conflict theory maintains that conflict occurs because of limited resources in society and the unequal advantage of some groups. The economic advantages of some groups lead to support for stereotypes, prejudice, and discrimination toward the less fortunate (Jackson, 1993; Sherif, 1966). As early as 1938, Dollard documented the effects of economic competition on discriminatory behavior: as jobs grew scarce in the community, anger was directed toward new immigrants. We see similar results in various parts of history. Each wave of immigrants coming into the United States has had to deal with discrimination as they threatened the jobs of the native born. These threats are currently being felt, with some 12 million illegal immigrants now in the US and millions more in Europe. People who feel most threatened by immigration, frequently poor whites, develop the most prejudicial attitudes. During the California gold rush, Chinese laborers came into the country in large numbers and competed for jobs with white miners. The resulting threat produced very prejudicial stereotypes, and the Chinese were described as primitive and depraved (Jacobs & Landau, 1971).

Realistic conflict theory predicts an increase in prejudice when a country experiences economic difficulties. In a classic study, Hovland and Sears (1940) examined the correlation between the price of cotton in the South and the number of lynchings of black people from 1882 to 1930. Since cotton was then the economic backbone of the southern economy, a drop in price meant difficult times for workers and the community. The economic frustration made it likely that the deprivation of white workers would be expressed in aggression toward minorities. That is exactly what occurred: whenever the price of cotton dropped, the number of lynchings increased (Hepworth & West, 1988). Did poor black people have anything to do with the white people’s economic frustration? Not at all, other than the fact that both groups competed for the same resources.
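The logic of such an archival study is a simple correlation between an economic indicator and a count of violent events across years. The sketch below shows that calculation in Python; the yearly figures are invented placeholders, not the historical data analyzed by Hovland and Sears. A negative coefficient would correspond to the reported pattern of more lynchings in years when cotton prices fell.

    # Minimal sketch of the correlational logic: economic hardship vs. violence.
    # The yearly figures below are invented for illustration only.
    from statistics import mean

    cotton_price = [11.2, 10.1, 9.4, 8.7, 9.9, 12.3, 7.8, 8.2]   # cents per pound
    lynchings    = [8,    10,   13,  15,  11,  6,    18,  16]    # incidents per year

    def pearson_r(x, y):
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        return cov / (var_x * var_y) ** 0.5

    # A negative r mirrors the reported pattern: lower cotton prices,
    # more lynchings (Hovland & Sears, 1940; reanalyzed by Hepworth & West, 1988).
    print(f"r = {pearson_r(cotton_price, lynchings):.2f}")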

5.7 Scapegoat theory
When times are difficult and the culprit behind the frustration is not immediately apparent, or is too powerful to confront, a scapegoat is often found. In Nazi Germany the scapegoats were the political and ethnic groups considered undesirable in society. Scapegoat theory is different from realistic group conflict theory. In Palestine, Jews and Arabs are struggling over real resources in what is essentially a zero-sum game: whatever one side gains in territory is at the expense of the other. In scapegoat theory, by contrast, the source of the frustration is not easily identified, or is otherwise too powerful to confront. In the case of poor whites and blacks struggling for survival, a realistic target of the frustration would have been the economic system and those who upheld the status quo in society. The system was responsible for the poverty of both whites and blacks. The system, however, was difficult to confront, and black people became a convenient substitute target. When a group is easy to identify but unable to defend itself, it becomes an easy target for scapegoating (Berkowitz, 1962).

One experiment created a situation which made the participants angry. Subsequently, the subjects shocked a black confederate of the experimenter at significantly higher levels (Rogers & Prentice-Dunn, 1981). When people are frustrated or angry, scapegoating becomes an easy substitute for the real targets of aggression. This idea finds support in many modern conflicts. In Eastern Europe the collapse of existing societies brought great economic uncertainty and worry. These societies have seen an increase in chauvinistic nationalism, the growth of intergroup hostility, and attacks on those who can be identified as outsiders.

5.8 The Robbers Cave study
Perhaps our societies by their very competitive nature produce more or less automatic hostility whenever groups are formed. In the classic study by Sherif, Harvey, White, Hood, & Sherif (1961), Sherif and his collaborators investigated intergroup hostility in a Boy Scout camp. They succeeded in observing the boys as participant observers by posing as the maintenance crew of the camp. The researchers carefully noted the development of group relations as a consequence of competition. Many hours were spent initially screening a pool of candidates to find 22 boys who were equivalent on all significant dimensions. The participants did not come from broken homes, had no significant school problems, and were ethnically the same. This sample was then divided into two groups of eleven boys each.

Initially each group experienced considerable cohesion as the boys enjoyed the varying camp activities. Each group chose a name for self-identification, the Rattlers and the Eagles. The experiment proper began when the boys were brought together for a tournament. The competitive part of the tournament brought on feelings of frustration as each group impeded the other from achieving coveted prizes. Frustration brought on feelings of enmity, and the two groups hurled insults at each other, burned the opposing group’s flag, challenged members of the opposing group to fist fights, and so forth. It appeared to Sherif that the mere presence of the two groups under conditions of competition brought on the intergroup hostility. If hostility can be created around such minimal competition, which after all did not threaten the boys’ survival, how much more hostility can be created when intergroup competition occurs around issues that do threaten survival or group identification?

5.9 Group categorization: the in-group versus the out-group
Historically, groups have served important functions for their members, such as survival, identity, and self-esteem. Given these important functions it is no wonder that most of us develop a favorable bias toward our own group. When we identify ourselves with a group, the in-group, we at the same time define those who do not belong, the out-group. In a competitive society that identification is unfortunately also associated with a negative bias toward all who are not “us”.

In fact it takes very little to create in-group bias; mere membership in a group is sufficient. Early experiments concentrated on the minimal group categorization design, in which the experimenters sought to identify the minimum differences between groups required to produce in-group bias (Tajfel and Billig, 1974; Tajfel, 1970; 1981; 1982). Subjects were divided into arbitrary groups, so the distinction between the groups was minor; they were supposedly distinguished by their liking of abstract paintings. Even with this trivial distinction the experimenters could create in-group bias.

In another study, Doise, Csepeli, Dann, Gouge, Larsen, & Ostell (1972) created experimental groups in the laboratory by asking the participants for their aesthetic opinions of blown-up pictures of blood corpuscles. These pictures were abstract and did not form a real basis for making aesthetic judgments; we asked for these opinions simply so we could form two trivial experimental groups on the basis of “aesthetic” preferences. All the participants (German soldiers) were asked to state their preference on a series of paired comparisons of these meaningless abstractions. After they stated their preferences, we left the room as if to score the results.

Following an interval we returned and stated that this experiment had been carried out in various parts of the world and that people generally fall into one of two groups of aesthetic preference, which we called X and Y. The discerning reader will now have observed that we created two nonsense groups based on a meaningless task. We then provided the participants with their group identification: half of the participants were randomly told they belonged to group X, the other half to group Y. Note that the participants did not know who else was a member of either group, only their own identification. On the basis of such meaningless group identification did the participants demonstrate in-group bias? The answer was yes. The participants were asked to describe members of group X and Y on a semantic differential attitude measure, to describe each group's physical traits, and to distribute money for participation in the experiment. The distribution of money could favor either group, or be equal.

The results showed significant in-group bias consistent with the experiments performed by others (Wilder, 1981). On the basis of a meaningless group categorization, participants had more favorable attitudes toward members of their own group, described them with more favorable physical traits, and distributed more money to an anonymous member of their own group. In this minimal group design we emphasize again that the in-group bias resulted from a task asking for bogus aesthetic preferences, and arose without the participants knowing who in the room belonged to either group. If it takes so little to create in-group bias, how much more bias is present toward groups which are meaningful, like groups formed by gender, religion, or political views?
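To make the allocation measure concrete, here is a minimal sketch in Python, using invented numbers, of how an in-group bias index could be computed from such money distributions. The data and the function name are illustrative assumptions, not the materials or analysis of the studies cited.

# Hypothetical allocations from five participants:
# (amount given to an anonymous in-group member,
#  amount given to an anonymous out-group member)
allocations = [(7, 3), (6, 4), (8, 2), (5, 5), (7, 3)]

def bias_index(pairs):
    # Mean difference between in-group and out-group allocations.
    # Zero indicates equal treatment; positive values indicate in-group bias.
    diffs = [ingroup - outgroup for ingroup, outgroup in pairs]
    return sum(diffs) / len(diffs)

print(f"Mean in-group bias: {bias_index(allocations):+.2f} units")

With perfectly equal treatment the index would be zero; the invented data above yield a positive value, which is the pattern the minimal group experiments report.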

Many other experiments have confirmed the in-group bias (Ashburn-Nardo, Voils, & Monteith, 2001). The participants know they are not making choices for themselves, and that the money they distribute goes to an anonymous participant. Yet time and time again participants show favoritism toward members of the in-group. In-group bias is even manifested when conditions do not favor the in-group's outcome: participants are willing to receive less if their choices lead to a lower outcome for the other group, showing the underlying competitive motivation. In a competitive society group distinctions are almost automatic (Brewer & Brown, 1998). In the real world the outcomes frequently involve much more than the mere distribution of money. The in-group bias has been found in both genders and in many nationalities. However, the in-group bias effect is weaker in interdependent cultures where people identify more with the cultural group, and make fewer competitive distinctions (Gudykunst, 1989).

5.9.1 Groups and social identity theory
Groups serve complex functions in the psychological economy of the individual. Our sense of who we are is defined by our group memberships (Hogg & Abrams, 1988). Groups give us a sense of belonging that is related to positive feelings (Perdue, Dovidio, Gurtman, & Tyler, 1990) and to our sense of well-being. Some groups may have little importance, like those in the minimal group design. Other groups, however, are central to our understanding of meaning or our sense of security. These may be ideological in nature or express central values of the member in some other way. The more strongly we are attached to a group, the more likely we are to see competing organizations as threatening, and to react to that threat.

Perceived threats are strongest when the values of a competing organization resemble our own group's values, yet differ on some crucial dimension. "Civil wars" are always the most violent. Historically we can observe this in the civil war in the US, in the battles between religious groups (e.g. the Shia versus the Sunni), or between related political organizations (Trotskyist versus pro-Soviet parties). We act in prejudicial and hostile ways toward competing organizations (Crocker & Luhtanen, 1990).

Group identification is also important to our sense of self-esteem (Cialdini, Borden, Thorne, Walker, Freeman, & Sloan, 1976). Cialdini and his collaborators recorded how often students wore school T-shirts after their athletic teams experienced victory or defeat. As expected, the students were more likely to wear school colors after victories, when they could feel good about their association with the school. When our group achieves important goals we bask in its reflected glory. Witness the Olympic Games: the pride of an Olympic championship is shared not only by the players and spectators, but by all members of the national group.

The commercial world has caught on to the possibilities of social identity. The marketing of Nike shoes, for example, uses the concept of social identity. There are few differences between Nike shoes (other than brand name) and shoes costing a few euros, but when an esteemed sports star is associated with the product, it encourages more buying. Fans feel that by wearing the clothing they share somewhat in the identity of the successful athlete. On a more personal level, we seek to associate with successful people, since doing so offers social recognition and self-esteem. Tajfel and Turner (1979) showed that a person's self-concept and self-esteem derive not from individual achievement alone, but also from the groups to which we belong.

Since our self-esteem is derived from group membership, it logically leads to in-group favoritism. Fighting for the prestige of the group lifts our spirits and self-esteem. Some studies have examined this phenomenon by testing for self-esteem after a participant performed some act favoring the in-group. Studies (Lemyre & Smith, 1985; Oakes & Turner, 1980) show that people feel improved self-esteem after engaging in in-group favoritism. Those who identify strongly with the group also take stronger offense when the group is attacked: strongly attached people take criticisms personally (McCoy & Major, 2003).

5.9.2 Social dominance theory
Social dominance theory describes societies as hierarchies with some people as winners and others as losers. Several researchers have suggested that dominance hierarchies are created because they bring evolutionary success (Sidanius & Pratto, 1999). In hierarchical societies, those at the top have an interest in stable social relations. The socially dominant defend the status quo by controlling the political apparatus and organizations in a country. Those lower in the hierarchy, on the other hand, have an interest in establishing equality. They work in organizations like unions that promote egalitarian relations. The dominance orientation has strong prejudicial consequences for ethnic minorities (Duckitt, 2003). The socially dominant favor social conformity at any price, and display tough-mindedness in dealing with outcasts like illegal immigrants (Duckitt, Wagner, du Plessis, & Birum, 2002).

On some occasions dominant groups maintain their privileged positions through physical force. The guardians of the state might exercise coercive power when required. However, the less dominant groups can also be co-opted. People can be seduced by apparent benevolence, the "father" figure, whether in the home or of the nation. In Turkey, for example, the founder Ataturk was called the "father of the nation". Jackman (1994) calls this benevolent paternalism.

On an interpersonal level many men are both paternal and dominant. Women are loved, but also told to stay in their traditional roles. In the privacy of the home, "house" slaves were often treated like members of the family during slavery. This held true as long as they remained servants and stayed in their subordinate roles. Supporting ideologies were developed to justify the dominant role of the master and slave owner. These dominance ideologies ascribed negative traits to the subordinate group, in this case the slaves (Klugel, 1990). In racist ideology, for example, blacks were perceived as apathetic at work and promiscuous in interpersonal relations. Nowadays the debate on racial differences focuses on differences in intelligence. This extends the dispositional attributions to genetic differences. In this modern dominance theory, blacks are viewed as genetically inferior. Such "scientific" explanations have historically also found support among certain religious groups through selective readings of religious scripture.

Under competitive conditions there is always the fear that the dominated group will successfully fight for its place in the sun. In a zero-sum world of scarce resources, equality between groups means that the socially dominant lose out. Some whites worry that their lives will deteriorate when minorities are given equal rights. The dominant group may also perceive threats to the welfare of the entire group or class. Individual self-interest is not the primary factor in prejudicial attitudes (Sears & Funk, 1991). Group deprivation seems to aggravate people the most, not personal deprivation. As a group, whites fear threats from immigrants, even when they are not personally affected (Pettigrew & Meertens, 1995). The reason seems apparent. Personal deprivations can be attributed to misfortune or to being unfit for a job. Group threat, however, is more serious because it is beyond our control.

Those who see competition as a major cause of prejudice do not think that people in advantaged positions will willingly give up their dominance. There are simply too many economic and other advantages accruing to those who dominate society. Perhaps the apparent decline in blatant racism is primarily illusionary. Since blatant racism is socially unacceptable, bigots keep their own counsel. Underneath social politeness lurks the same opposition to racial equality and the same unfavorable attitudes toward minorities (Sidanius & Pratto, 1999). Whites try to avoid offending racial minorities, and may even compensate and treat blacks more politely (Schuman, Steeh, Bobo, & Krysan, 1997). Others, however, have found support for persistent racist attitudes in face-to-face interviews (Krysan, 1998). Whether attitudes are changing or not, for many whites the issue is resolved by conforming to social expectations.

5.10 Social conformity and prejudice
Our desire to belong and be accepted by our reference groups produces conformity, whether in the family, the community, or the nation. Most people's behavior follows the easiest path and expresses attitudes that correspond to group norms. When it comes to behavior toward minorities, people act more from a desire to get along in their communities than from individually felt hatred. As already noted in chapter 7, surprisingly normal people acted as guards in German concentration camps and conformed to Nazi expectations in committing heinous acts, believing in the process that they did the right thing. The link between conformity and prejudice is well established. Pettigrew (1958) found that prejudice among whites in both South Africa and the American South was largely motivated by conformity to established community norms. People who were prejudiced were rewarded, and those who did not conform were shunned. Pettigrew showed that those members of society who were most conformist were also the most prejudiced.

Socially conforming people have strong desires to avoid sanctions from significant others, and avoid experiencing the social cost of defying prejudicial norms. Reitzes (1953) and Minard (1952) showed white miners displaying no prejudice in the mines where racial interdependency was required and accepted. At the same time, however, these miners lived in rigidly segregated communities above ground. The dual behaviors can best be explained by the different norms which governed the mines and the community. The conformity perspective argues that people are prejudiced because they want to be accepted by valued reference groups.

The institutions of society work to perpetuate the norms that allow prejudicial behavior to appear “normal”. During his work in Australia, Larsen (1977b) observed the effect of community norms on white discriminatory behavior toward Aborigines. The norms allowed for discrimination and prejudice, although challenged by the 1975 Anti-discrimination Act. When some white Australians let their guard down in confidential conversations, one could observe the normative support for many of the prejudicial attitudes (Larsen, 1978; 1981).

5.11 Institutional support for prejudice
The institutions of society lend crucial support to prejudice through the mechanisms of segregation. In the South of the United States (as under the apartheid regime in South Africa) public facilities were rigidly segregated until the civil rights victories of the 1960s. Black people could not sit down in a restaurant and have dinner with their families, but might be fed through the back door. They could not drink from the same water fountain as whites, nor sit anywhere except in the back of the bus. School facilities were also segregated. The institutions of society conveyed the inferior status of black people to both whites and blacks. The fighters for black equality and freedom understood the institutional basis of racism. It is no wonder that the first assaults on racism came with the "sit-ins" in restaurants, and in the attempts by mixed groups of whites and blacks to integrate the transportation system. The changes that followed the Montgomery bus boycott, and the integration efforts of the interstate freedom riders, came because the structures of segregation were undermined and destroyed by these efforts.

Today, most of these overt forms of institutional support for prejudice have been removed in US society. Yet it was not until the year 2000 that a university in the United States ended its ban on interracial dating (CNN, 2000). Nor does this mean that there are no discriminatory norms still in place. There are still norms about minorities and women that prevent fair treatment in the workplace. These views persist despite laws that make discriminatory behavior illegal. Discriminatory norms require only an unspoken consensus within a company that blacks are not suited for managerial responsibilities, or that a woman's place is in the home looking after children and husband. Stereotypes still find their way into television programs and the movies (Shaheen, 1990), depicting minorities and women in stereotypical ways. Women, for instance, are still underrepresented in the media, being outnumbered three to one (Bretl & Cantor, 1988; Lovdal, 1989). There are also new stereotypes of the "fanatic Arab" and the "dangerous black criminal", which at best represent overgeneralizations of social reality. Normative conformity continues because of its support in society (Pettigrew, 1985; 1991) and its resistance to change. Changing the institutional support for prejudice is the most crucial weapon in the arsenal of those who want to build a society free of discrimination. The removal of institutional support for racism in the United States allows for new norms that largely favor integration (Hyman & Sheatsley, 1956; Knopke, Norell, & Rogers, 1991).

5.12 Personal dynamics and prejudice
Some attitudes derive from differential personality development. We are not all equal in opportunity or childrearing experiences. Some of us have been favored by good fortune. Other people developed in harsh environments and suffered permanent insecurities as a consequence.

Sources of prejudice are also found within individuals, rooted in personality or our way of thinking. In a competitive society we gain status by ranking higher than others on socially valued dimensions. The ranking, in turn, is a source of self-esteem, and functions to support our self-perception as valued members of society. In a competitive university, it is not the student's individual achievement that gives pleasure, but his or her ranking relative to other students. Student competition has at least one detrimental effect: in academically competitive environments fellow students are looked upon not as resources, but as competitors for a place in the ranking order of excellence.

When threatened, status-conscious people may respond with prejudice. Those low on the economic ladder, and under threat of slipping further down, are the most prejudiced (Lemyre & Smith, 1987). This effect was demonstrated in a study of university sororities. Women who belonged to sororities of relatively low status tended to be prejudiced toward higher-ranked sororities (Crocker, Thompson, McGraw, & Ingerman, 1987). Attacks on self-esteem, such as being humiliated, also produce prejudicial reactions (Meindl & Lerner, 1984). In general, anything which diminishes the individual or produces insecurity increases prejudicial attitudes (Greenberg, Pyszczynski, Solomon, Rosenblatt, Veeder, Kirkland, & Lyon, 1990).

5.12.1 The authoritarian personality revisited
Adorno, Frenkel-Brunswik, Levinson, & Sanford (1950) discussed several authoritarian traits that explained prejudice. Personality traits predictive of prejudice included submissiveness to authority, an intolerance for anything that indicated weakness, and a punitive attitude toward those seen as outcasts of society. The insecurity of the authoritarian person leads to an exaggerated concern with status and power. Authoritarians want to solve international problems through violence, and have contempt for those seeking peaceful solutions. Many authoritarians in the American population are convinced of the need for toughness, and lend support to military adventures. They are also contemptuous of criticism of the military establishment which they see as the ultimate guarantee of security. Many authoritarians seek careers in the military or security services.

The authoritarian sees everything in absolute terms: there is a wrong way and a right way. There are black people and white people, and the two should not mix; why else, the reasoning goes, did God create separate races? Ambiguity is not easily tolerated by authoritarians, and they favor political leaders who appear tough and decisive. Authoritarians are those who, for example, would not admit defeat in Vietnam, but argued that the proper placement of atomic bombs would have decisively ended the conflict.

Domestically, authoritarian tendencies seem to increase in times of economic difficulty and distress (Doty, Peterson, & Winter, 1990). Nazi ideology gained adherents in Germany after the defeat in the First World War and the economic depression that followed. Other upheavals seem to confirm the underlying insecurity and hostility manifested in prejudice (Larsen, 1969; 1970). In chapter 10 we will discuss in more detail the psychology of torturers. Torturers often display submissiveness toward authority, and have contempt for their victims (Staub, 1989). In international relations, authoritarian tendencies are unleashed in chauvinistic attitudes. Chauvinism is the idea that one's nation is better than any other nation. It is not pride in cultural achievement that motivates authoritarians, but rather a belief in the real or mythical high ranking of the nation. "God's country" or "blessed land" are synonymous descriptions of the nation for people who gain self-esteem vicariously, and who are fundamentally motivated by insecurity.

5.12.2 Social cost, belief incongruence and race: some theoretical comparisons
Social cost is a concept which argues that prejudice derives from our desire to avoid disapproval and gain the approval of significant others in intergroup relations. Intimate relations produce the greatest potential social cost. As we discussed earlier, families are likely to express strong feelings, positive or negative, when a loved one proposes marriage to someone from another ethnic or racial group. The concept differs from normative conformity (Pettigrew, 1958) in being specific about who enforces the norms of a prejudiced community. How do we identify norms except through the perception of punishments or rewards administered by significant others? Esteemed religious or community leaders may also be a source of social costs when they are in contact with the person. Normative conformity has little meaning apart from this specific vehicle of enforcement, the social cost of acceptance or rejection (Larsen, 1971).

Rokeach (1960) extended the theory of authoritarianism. Right-wing authoritarianism (Adorno et al., 1950) referred to the content of people's beliefs thought responsible for prejudice and much destruction in the world. Rokeach argued that close-mindedness was the operative form of authoritarianism and that it could occur at any point of the political spectrum. The critical factor in dogmatism is the relative open-mindedness or close-mindedness to information. When our minds are closed, we are high in dogmatism and prejudice. Rokeach would argue that we reject others primarily because of perceived differences in beliefs, or belief incongruence. Therefore what matters is not so much the content of a person's beliefs, but the belief structure, whether the mind is open or not. If we are prejudiced toward black people, Rokeach would argue, it is because we perceive differences in values and beliefs.

Unfortunately the literature is largely silent on the relative importance of these various theories of prejudice. Researchers are content with establishing the validity of conceptual ideas, not the relative importance of each. Larsen (1974; 1976; 1978) found relative support for the social cost concept. Why is belief incongruence a factor in prejudice? It could be argued that close-mindedness is a consequence of the approval-disapproval process, as it requires some motivating function. The point argued here is that people become close-minded for reasons of social cost, and the need to sharply differentiate between approved and disapproved thought. Again, why is racial categorization a factor? Social norms about race are powerful determinants precisely because they bring perceived social costs from significant others. In other words, social norms are all about conforming to gain approval and avoid disapproval. Social cost may be seen as the integrating variable that explains prejudicial behavior.

5.13 Social cognition: ways of simplifying the world
As discussed previously, we stereotype because doing so helps us make sense of the bewildering array of stimuli which demand attention. By developing social categories like black and white we simplify our world and reduce attentional stress. Simplifying social cognition requires that we bypass a lot of information and focus on what seems most important: people's membership in social categories. Social categories help us to think more quickly and bring to mind all relevant information, even if much of it is distorted and inaccurate. Stereotypes help us recall quickly from memory all the relevant and salient information. Do you greet a woman the same way as a man? If not, it is because you have categorized men and women, and before the interaction have brought the salient stereotypes to bear.

There are problems in social categorization. Keeping in mind our discussion of stereotypes, social categorization simplifies social reality, and in the process robs the individual of what is truly salient. Social categorization bypasses individual evaluation and bases judgment on group stereotypes. Yet we all know that there are many individual differences within groups. Not all women are nurturant; some women take the lives of their children. Not all men are dominant; some pursue fulfillment in other ways, for example in nursing. When we categorize people, we direct attention away from these salient individual characteristics. Stereotypes may distort social reality and produce false memories. We tend to remember traits and behaviors that are consistent with the category even if they are false (Lenton, Blair, & Hastie, 2001).

Nevertheless, category impressions are universal and resistant to change. We attend to individual differences only if we have time, or if the categorization process is challenged. A realistic view of others would require evaluation of personal attributes, a very time-consuming process (Fiske & Neuberg, 1990). It is easier to apply our value-laden stereotypes, which are readily available as they are largely emotionally based (Stangor, Sullivan, & Ford, 1991). The behavioral utility of social categorization can be easily shown (Payne, 2001). In that experiment, participants were shown black and white faces followed by objects. Participants found it easier to identify a gun when it was preceded by a black face, evidence for the presence of the "black as criminal" stereotype. In the United States, the white versus black categorization goes to extremes, as anyone with even a drop of black blood belongs to the category. It is reminiscent of the Nazi categorization of Jews: anyone with a minimal genetic connection was categorized as such. All people carry schemas of typical representatives of social categories (again consult chapter 4 on social cognition), and it is those typical facial traits that elicit stereotypes for many people (Lord, Lepper, & Mackie, 1984).

Are there evolutionary advantages which derive from group membership? If so, those people who survived and passed on their genes may well have a predisposition to favor in-groups and disdain out-groups. Does evolutionary advantage explain the unconscious favoritism found in the minimal group design? Other researchers would point to the competitive nature of many human groups, particularly in western countries. Competition produces unconscious biases toward those with whom we share something, even something meaningless (Mullen, Brown, & Smith, 1992; Wilder, 1981). Even when all that is shared is a mindless category, participants attributed more positive personality traits to members of the in-group.

5.13.1 Out-group homogeneity
The process of simplifying the world requires us to use stereotypes, resulting in perceiving members of out-groups as more similar to one another than they in fact are. This is called out-group homogeneity (Linville, Fischer, & Salovey, 1989). Males think that females are more alike than their real behavior justifies. Perhaps you believe the stereotype that all women want to do is raise families? In fact there are important individual differences overlooked in the perception of out-group homogeneity. Some women want to have families, some want careers, others want both families and careers. However, by using the perception of out-group homogeneity we can simplify our world and treat women as a class of people. The perception of out-group homogeneity has consequences for employment. If you believe the only purpose of women is to have children, would you hire a woman for a job requiring expensive training, or promote women to positions of responsibility? Discrimination toward other groups is justified in similar ways. If you meet a member of the out-group you can call on the appropriate schemas, and your responses will be based not on individual differences, but on the stereotype. The perception of out-group homogeneity has been found in other studies as well (Hartstone & Augoustinos, 1995; Ostrom & Sedikides, 1992).

People believe that members of the out-group think and act alike. In studies of simple music preferences at neighboring universities, participants see more similarity among students at the other university. The perception of out-group homogeneity generalizes behavior to all members of the out-group, while allowing for more diversity within the in-group (Quattrone & Jones, 1980; Ostrom & Sedikides, 1992; Park & Judd, 1990). We interact more with members of the in-group, and therefore have more opportunity to observe differences. Lacking that person-to-person experience with members of the out-group, we form opinions based on the common stereotype.

5.13.2 Simplification of in-group similarity and perceived out-group differences
Despite our having more common experiences with the in-group, some studies show that stereotypical cognition produces perceptions of less variability within both the out-group and the in-group. Further, we perceive greater differences between the two groups (Tajfel & Wilkes, 1963). It is difficult to build bridges between groups when stereotypes accentuate differences and do not allow for all that the groups have in common. In fact, humanity probably holds most values in common. All societies appreciate the importance of family, the search for meaning, the importance of peace, and respect for the dignity of the individual. We probably all put value on ending global warming so that our species may survive and our children have a more secure future. The hostility generated by stereotypes does not allow us to consider these common values. In all societies and cultures people have much more in common than the perceived differences suggest. All societies desire to survive and prosper, and to support families. All people face developmental tasks, and the ultimate ending of existence. These commonalities provide a basis for the human discourse which stereotypical thinking interrupts or destroys.

Stereotypes help us conserve intellectual energy, which can be applied elsewhere (Macrae, Milne, & Bodenhausen, 1994). The downside is obvious. As constructs, stereotypes are over-generalizations, and inaccurate descriptions of other groups. Stereotypes may save time for the cognitively lazy person, but they produce unfair judgments of others, and lend support to discriminatory practices.

5.13.3 Stereotypes determine interpretation of interaction
Part of the resistance to change comes from biased information processing. The individual's behavior is seen as typical of the group as a whole. Information about the out-group is also not evaluated fairly (Bodenhausen, 1988; Kunda & Thagard, 1996). Information consistent with the stereotype is placed in memory for future interactions; facts that are inconsistent are forgotten or ignored. How the information is interpreted is influenced by the stereotype. In one study, white participants watched a heated debate between two men, one white and one black. At one point, one of the men in the debate gave the other a shove to express disapproval. Half of the participants saw the black confederate giving the shove, the other half the white confederate. At various points in the discussion, the participants were asked to rate the interaction. The racial stereotype affected how the same behavior was coded. When the black man shoved, it was perceived as aggression, whereas when the white man did the shoving, it was perceived as "playing around". In another study (Stone, Perry, & Darley, 1997) participants listened to a play-by-play account of a basketball game. Half had a picture of a white basketball player; for the other half the picture was darkened so the same person now looked black. Those who thought the player was black attributed more athleticism to him and thought him a better player, consistent with the stereotype of blacks in society. Those who thought the player was white rated him as showing more energy and hustle, and as playing a smart game. Both of these studies show that biased stereotypes affect how the same information is processed.

5.13.4 Stereotypes of others affect behavior
Other people's stereotypes may also affect your behavior. In a study investigating the effectiveness of a white and a black debater arguing about nuclear energy, the participants were asked to rate the skill employed in the debate. In one experimental condition, a confederate of the experimenter made a highly racist remark about the black debater, to the effect that there was no way a "nigger" could win the debate. In two other conditions, he made either a non-racist remark or no remark at all. If the racist comment had no effect, there should be no difference in evaluation. The results showed that the participants rated the debaters equally when a non-racist remark was made. However, the black debater was rated lower in skill after the racist remark. These results show that we can be influenced by the comments of those around us, and the study is a strong argument for rules prohibiting prejudicial and hostile commentary. Stereotypes are easily elicited, and difficult to remove. As part of our cultural heritage they are always available and ready to use.

5.13.5 Implicit and explicit stereotypes
Devine (1989) used a distinction from cognitive psychology between automatic and controlled processing. Prejudicial attitudes may likewise be either explicit or implicit. Explicit attitudes exist as a result of conscious awareness and reasoned conclusions. However, at times explicit racist attitudes are repressed as unacceptable to the individual or society. Attitude scales measure conscious attitudes on which the individual can reflect, i.e. explicit attitudes. Explicit measures correlate with important behaviors such as judgments of a black defendant's guilt, or assessments of the adequacy of black interviewers.

Implicit attitudes, on the other hand, are measured (as discussed earlier) by priming respondents with racial pictures and measuring response times to stereotype-consistent and stereotype-inconsistent words (Rudman & Kilianski, 2000). Implicit attitudes correlate with other involuntary responses, like blinking or the aversion of physical or eye contact (Banaji, Nosek, & Greenwald, 2005). The difference between implicit and explicit prejudice continues to be a subject of debate in social psychology (Blair, 2002).
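To illustrate the scoring logic behind such response-time measures, here is a minimal sketch in Python with invented reaction times. The variable names and numbers are assumptions made for illustration; they do not reproduce the procedure or data of the studies cited.

from statistics import mean

# Hypothetical reaction times (in milliseconds) following a racial prime,
# split by whether the target word was consistent or inconsistent with
# the stereotype being probed.
rt_consistent = [512, 498, 530, 505, 520]
rt_inconsistent = [601, 588, 615, 594, 607]

# Slower responses to stereotype-inconsistent words yield a positive score,
# which is interpreted as evidence of an implicit association.
implicit_score = mean(rt_inconsistent) - mean(rt_consistent)
print(f"Implicit association score: {implicit_score:.1f} ms")

The point of the sketch is simply that the attitude is inferred from a difference in relative speed, not from anything the respondent consciously reports.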

5.13.6 Resistance to changing stereotypes
Stereotypes are heuristic shortcuts (Macrae & Bodenhausen, 2000) that prepare us for interaction with little information. They reflect broad social and cultural beliefs. Most people would not find it difficult to describe other cultural groups using stereotypical traits (Gilbert, 1951; Katz & Braly, 1933). Many of these descriptions have remained the same even after many years (Devine & Elliot, 1995). When people have no personal experience with other national groups, they find it easy to describe that group in stereotypical terms. Bulgarians have stereotypes about Gypsies and Turks; Danes about Germans and Swedes; Vietnamese about the Chinese; and similar processes of simplistic social cognition operate in all national groups. Do you have stereotypes about Americans? Are they favorable or unfavorable? What are some of the descriptions you would use? In the Karlins, Coffman, & Walters (1969) study, Americans were described as materialistic, ambitious, pleasure-loving, industrious, and conventional. American Blacks, on the other hand, were described as musical, happy-go-lucky, lazy, pleasure-loving, and ostentatious. Which of these stereotypes has negative consequences for members of the group?

A major reason for the invariability of stereotypes is that they are descriptions of groups of people, not easily disconfirmed by individual behavior. Any individual variation can be rationalized as an exception. Information that supports the stereotype counts as supporting evidence, while factual evidence that disconfirms it becomes the exception that proves the rule. The frequency of crime in the black community is attributed to black culpability and dispositions toward a criminal life. Black members of the police force are seen as exceptions, the result of fortunate family or community experiences (Kulik, 1983; Swim & Sanna, 1996).

Information intended to change people's stereotypes often has little effect. In fact, information may be counterproductive as it elicits counter-arguing in the prejudiced person (Kunda & Oleson, 1997). New information favoring the targeted group causes the prejudiced person to counter-argue, producing in his mind all the reasons for holding his racist beliefs and resisting influence. It takes more than a few examples of the incorrectness of stereotypical views to change attitudes. The person must be bombarded with disconfirming information over a sustained period of time (Weber & Crocker, 1983). Since there are both cognitive and emotional reasons for resistance, stereotypes are difficult to change. Most prejudicial attitudes have strong emotional components which rational appeals do not address. Further, stereotyping simplifies the world, and we selectively attend to the information which confirms our beliefs.

Further support for stereotypes is found in the way we encode behavior, in how we use relatively abstract or concrete levels of description (Vallacher & Wegner, 1987). We can "help someone" across the street, or we can behave in "altruistic ways". The level of abstraction used carries different connotations about the behavior. A black police officer "arrested" a criminal; a white officer is a member of the "thin blue line". The more concrete a description, the less it says anything noteworthy about the individual. Any police officer can arrest someone, but one has to be ascribed altruistic value to be counted as part of the thin blue line that protects society.

In fact, stereotypes are almost automatic for many people. However, some people can indeed overcome prejudicial attitudes by controlling their cognition. A fleeting prejudicial thought can be suppressed as being unworthy or unrealistic. Other people, however, do not take the time to reflect on bigoted thinking. In the entrenched prejudicial person, the control processes are not activated. Bigots more or less automatically incorporate the common stereotypes without hesitation.

Devine (1989a) and Zuwerink, Monteith, Devine, & Cook (1996) developed a two-process theory of cognitive processing. Automatic processing brings the stereotypes to mind; the control process enables us to refute the distorted views. However, there is considerable variability in the automatic processing of negative stereotypes; we do not all process automatically to any common standard (Fazio, Jackson, Dunton, & Williams, 1995).

5.13.7 How to draw the wrong conclusions: illusionary correlations
Our cognitive processing perpetuates stereotyping through the perception of illusionary correlations. These occur when we think two objects or variables are correlated when in fact they are not. Some people believe that the inability to have children is caused by stress, and therefore that when couples adopt, removing the stress, they conceive. In fact there is no relationship between stress and pregnancy. However, at some point an adoption was followed closely in time by a pregnancy, and stress as a cause of infertility became a common belief (Gilovich, 1991).

Illusionary correlations also promote more serious stereotypes. The idea that minorities are dangerous may be based on an illusionary correlation with black actors' behavior in violent television episodes. Blacks are a minority and therefore distinctive. Hence, events featuring black actors are better remembered because of their distinctiveness, even though many white actors also appear in violent programming. Many stereotypes directed toward minority groups are confirmed by illusionary correlations (Hamilton, Stroessner, & Mackie, 1993). The distinctiveness of minority representatives leads to a belief in an illusionary correlation between the behavior observed on television and the behavior of the entire racial group. When people with stereotypes observe new behavior, their expectations and perceptions are guided by the illusionary correlation. If black actors appear in nonviolent programming, that is an exception or not relevant to the situation. Because of the selectiveness of perception, it is very difficult to disconfirm the illusion. We see what we want to see (Hamilton & Sherman, 1989).
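The mechanism can be made concrete with a minimal sketch in Python using invented counts; the categories and numbers below are illustrative assumptions, not data from the studies cited. The actual co-occurrence table shows no association between group membership and violent roles, but overweighting the distinctive pairing, as memory tends to do, produces an apparent correlation.

# Hypothetical counts of televised portrayals.
counts = {
    ("minority", "violent"): 10,      # the distinctive pairing, well remembered
    ("minority", "nonviolent"): 40,
    ("majority", "violent"): 50,
    ("majority", "nonviolent"): 200,
}

def violent_rate(group, table):
    # Proportion of a group's appearances that involve violent roles.
    violent = table[(group, "violent")]
    return violent / (violent + table[(group, "nonviolent")])

# Actual rates are identical (0.20 for both groups): no real correlation.
print(violent_rate("minority", counts), violent_rate("majority", counts))

# If the distinctive minority-violent pairing is remembered three times as
# often as it occurred, the recalled rates diverge (about 0.43 versus 0.20),
# and an illusionary correlation appears.
remembered = dict(counts)
remembered[("minority", "violent")] *= 3
print(violent_rate("minority", remembered), violent_rate("majority", remembered))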

When the events believed to be correlated are both distinctive, the illusionary correlation is strengthened (Fiedler, 1991; Hamilton & Gifford, 1976; Smith, 1991). In a recent eating contest, a skinny young woman won hands down. Eating contests are novel events, and we do not expect skinny women to win them. The event and the skinny woman winning are both distinctive, and could form the basis of a new illusionary correlation: skinny women as champion eaters! However, the stereotype of big men as heavy eaters probably outweighs such distinctiveness.

Salient people are perceived as the cause of whatever is occurring (Taylor & Fiske, 1978). Distinctiveness attracts attention and creates illusions of differences that do not exist. We use distinctive cases as a heuristic rule in judging members of minority groups. A black person in an all-white group is distinctive, and we may see outcomes in the group as due to his behavior. If the group is frustrated, we may be tempted to think this is due to the hostile behavior of the minority person, an illusionary correlation. We see a black person driving a Cadillac and conclude that poor blacks do not care about housing. Alternatively, the Cadillac as a status symbol may lead to the illusionary correlation that all black men have gotten rich by ill-gotten means. One or two similar cases are sufficient to form an illusionary correlation.

The mass media reinforce illusionary correlations. A couple of years ago a mentally ill patient killed his psychiatrist in Oslo. There was subsequently much debate on the potential danger posed to society by the mentally ill. This singular event formed the basis of an illusionary correlation. In actual fact, there is little danger from psychiatric patients; only a few pose a danger to themselves or society. Stereotyping encourages people to see correlations where there are none (McArthur & Friedman, 1980).

6. Modern racism: the fundamental and ultimate attribution errors
The fundamental attribution error occurs when we attribute behavior predominantly to inner dispositions, disregarding significant situational determinants. According to Pettigrew (1979, 1980), this becomes the ultimate attribution error when we explain behavior of groups. The in-group is given the benefit of the doubt, and we think the worst when it comes to the out-group.

As society changes, racist ideology takes on new forms. Purporting to prove that blacks are inferior to whites serves important ideological functions. Genetic racial inferiority is a strong argument against integration, since the average intelligence of a nation would supposedly decrease with the integration of racially inferior and superior groups. The debate over the relative intelligence of racial groups has a long history. The most recent contribution to the debate is the book by Herrnstein and Murray (1994). In a review of research on intelligence, they presented evidence of statistically significant differences in academic performance between blacks and whites. These differences, the authors concluded, derive from genetic components. Learning can therefore modify performance only within these genetic parameters.

Besides these tests "proving" that whites perform better than blacks, other tests showed that Asian Americans perform better than whites. The important question is why these differences occur. Should we attribute them to genetic components, as Herrnstein and Murray would argue? That argument conforms to racist ideology, in which poor performance is attributed to dispositional causes, to some inadequacy within the targeted group.

However, the differences can also be attributed to situational causes. Nowhere in the United States do blacks and whites have comparable social environments. Blacks typically suffer from inferior social support, from poverty, inferior school systems, inadequate nutrition, and many other discriminatory factors that also explain racial differences. Since it is not possible to separate the genetic from the environmental component, the decision favoring situational or dispositional factors becomes a choice of ideology.

Racism impacts the self-concept and creates insecurity. Under conditions of evaluation, blacks feel apprehensive, which debilitates self-esteem and lowers performance. Blacks are well aware of the common stereotype about inferior academic performance, and feel "stereotype threat" from these expectations (Aronson, Quinn, & Spencer, 1998; Steele & Aronson, 1995). The apprehension centers on the fear that the black respondent will confirm the existing stereotype of intellectual inferiority. In the experiments cited, whites and blacks performed equally well when blacks did not believe they were being evaluated (when they thought the exam was for the purpose of improving the test itself). However, blacks did poorly when they believed the test evaluated individual performance. Most of you have experienced test anxiety, and know how it inhibits thinking and performance.

Similar stereotype threats are found for gender (Spencer, Steele, & Quinn, 1999). When women thought the purpose of a math test was to demonstrate differences between males and females, stereotype threat produced poor performance (see also the discussion earlier in this chapter). However, when women believed that the test was not designed to show gender differences, they did as well as men. Stereotype threat affects the performance of the targeted group. Remember that stereotype threat effects are found even in white males when they believe they are competing with Asian males in math (Aronson, Lustina, Good, Keough, Steele, & Brown, 1999). A common stereotype in the US supports the superiority of Asian males in mathematics.

We have a choice whether to attribute these differences to dispositional causes, e.g. the inferiority of women or of white males in mathematics, or to situational causes, i.e. different social environments and opportunities. The attributional conclusions drawn have important implications for social policy. If the dispositional cause is promoted, the resulting policy supports segregation and blames the victim. If attribution is made to situational causes, the policy required is improvement of the social environment.

Nevertheless, there is a strong tendency to blame the victim for any shortcomings (Lerner, 1991). By attributing poor performance to the victim, we can rationalize what otherwise would be an unjust world (Furnham & Gunter, 1984). Belief in a just world requires an attribution of blame to the victim: blacks are seen as personally responsible for their misfortune, and a dispositional attribution would argue that the rape victim's seductive behavior brought on the rape (Wagstaff, 1982).

6.1 A just world or racist ideology: The ultimate attribution error
The fundamental attribution error occurs when we attribute significant social behavior to personal dispositions and devalue the situational forces that may be responsible. The situational context of black behavior in America is slavery and the institutions that supported segregation and discrimination. Pettigrew (1979; 1980) suggested that this attributional bias in racial relations could be termed the "ultimate attribution error". When we interpret individual behavior within the context of group stereotypes, we commit the ultimate error, and we expect the worst from targeted groups. If a black person is intelligent and performs at high levels, we dismiss this as a special case. Intelligent behavior could even be used against minority people, as we found in our conversations with some whites in Australia. Intelligent Aborigines were perceived to be those of mixed race, and were also considered the most dangerous, according to this racist view.

The persistence of racist perspectives derives from ideological beliefs in a just world. Many people subscribe to the idea that we live in a fundamentally just world, and that misfortune is a consequence of our own behavior (Lerner & Miller, 1978; Lerner, 1980). Becoming a victim produces a negative evaluation, as we saw in the studies of attitudes toward rape victims (Carli & Leonard, 1989). Is the victim ultimately responsible? Just world ideology is closely tied to beliefs in individualism, and may be more dominant in western societies. Believing that the world is just explains much of the opposition to social welfare or national medical care. If you are poor or ill, this misfortune comes from bad choices you made in the past, and you are individually responsible.

The just world concept is related to social dominance theory. Those who are dominant can think of their fortune as an entitlement from a just God. Those who are unfortunate do not deserve sympathy, as they are responsible for their own lives. The just world concept applauds the winners in life and denigrates the losers. Sick people are responsible for their illness (Gruman & Sloan, 1983), and rape victims should have behaved less seductively (Borgida & Brekke, 1985). The just world concept supports many stereotypes and much discriminatory behavior. Social inertia is an ideological consequence, since ultimately misfortune is not the responsibility of society or the community. What are we to do?

7. The reduction of prejudice in society
As we have seen, prejudice affects millions of lives all over the world. What is to be done? Does prejudice derive from ignorance? Many people are prejudiced without having any personal experience with the target group. Perhaps ignorance can be reduced by education? Education may provide facts that help us see other people in a better light. Yet we have seen that many stereotypes are sustained because they satisfy emotional needs, and factual information changes few minds because of the selective information processing of the prejudiced person. Facts that support the stereotype are retained, whereas disconfirming information is discarded. Would more contact be helpful?

7.1 The right type of contact can lead to reduction of prejudice
Perhaps we need more personal contact with minorities. The 1954 Supreme Court decision, which outlawed school segregation in the US, was seen by many as the beginning of the end of prejudice. There were good reasons to feel that way. Deutsch and Collins (1951) had studied attitudes among whites who lived in segregated and integrated housing. They found that housing integration led not only to more contact between the races, but also to more positive attitudes among whites. However, the research that followed (Stephan, 1978; 1985) did not lend support to the idea that contact led to a decrease in prejudice.

The self-esteem of black children also did not improve after desegregation. In a majority of the studies, prejudice actually increased following desegregation. The increase in contact did not produce better interracial relations or an improvement in the self-concept. Formal desegregation did not result in real integration, as de facto segregation continued. In the integrated armed services, soldiers continued to be segregated in their friendship patterns; in schools, children ate lunch in separate corners and played primarily with same-race companions (Aronson & Thibodeau, 1992; Schofield, 1986).

Clearly contact did nothing to improve attitudes in these studies, so does contact have any effect? Some would maintain that contact at least reduces the most bizarre stereotypes (Pettigrew & Tropp, 2003). However, it is not contact as such that matters, but the type of contact. Historically, in the South of the US, there was plenty of contact between blacks and whites, but under conditions of inequality. Inequality served to confirm existing biases, as a result of both selective treatment and selective information processing. What mattered then was the type of contact (Allport, 1954). In his pioneering work, Allport noted the importance of equal status during the contact, the perception of common goals, institutional support for the contact, and the perception of common interests.

Sherif, Harvey, White, Hood, & Sherif (1961) came to similar conclusions. Hostility was reduced when the boys in the camp study perceived common goals and developed feelings of interdependence. In the housing study (Deutsch & Collins, 1951) the racial groups had equal status, and stereotypes were therefore confronted. The importance of friendly interaction has also been emphasized (Wilder, 1986). Formally desegregating interaction between groups does little to promote the friendly feelings essential to the development of empathy. Also, contact should be with many representatives of the other group to avoid the "exception to the rule" rationalization. Multiple contacts are necessary to encourage the disconfirmation of stereotypes. Since conformity plays so large a role in prejudicial behavior, it is also essential to change the social norms. Creating high quality contact may result in new social norms which lend support to equal treatment and valuation (Amir, 1969; Gaertner, Dovidio, Rust, Nier, Banker, & Ward, 1999). High quality contacts are personal and allow for friendship (Cook, 1978). Prejudice is reduced when contact is frequent enough, and has a personal quality that promotes empathy.

In today's USA blacks and whites continue to live in segregation. Despite laws that favor integration, the large majority continues to live in segregated neighborhoods (Fasenfest, Booza, & Metzger, 2004). Real segregation continues, as there is little friendship between the races (Jackman & Crane, 1986). In Europe those who have interracial friendships tend to be the less prejudiced (Pettigrew, 1997), which supports the importance of high quality contact. These results also underline the problem: those who are prejudiced simply avoid interaction and display anxiety about interracial contact (Plant & Devine, 2003), whereas the non-prejudiced seek more intimate contact.

At the end of the day, is there to be a common destiny? In the Sherif study, the boys cooperated on a number of tasks that subsequently changed their attitudes. These tasks involved what Sherif called "superordinate goals", goals held in common by all which transcended any group differences. There is no shortage of superordinate goals in the world. Controlling global warming is a superordinate goal which must be reached through the cooperation of all parties, and is essential to the survival of civilization. Nuclear disarmament is another superordinate goal. Today, so many years after the Cold War, the superpowers are still heavily armed and can destroy the entire world within 15 minutes. Everywhere in the world we face religious and ethnic divisions and conflict. The bloodbath that is Iraq reminds us of what happens when members of the same national group decide that their ethnic subcategory is more important than the overall national welfare. We need to view society with more inclusive categories (Dovidio, Gaertner, & Validzic, 1998b) and strengthen the perception that we are all part of humanity.

Societies must be created that meet the needs of all citizens. A cooperative world contributes to feelings of common destiny and the reduction of prejudice. Increasing national income and wellbeing would reduce the competitive cause of prejudice. Competitive societies can best be described as those playing a zero-sum game: what one person or group gains is at the expense of other individuals or groups. Can we develop a vision for more cooperative societies?

8. The jigsaw puzzle method in the classroom: An experiment in cooperation
The initial efforts at desegregating classrooms did not bring the desired improvement in self-esteem or racial cooperation (Stephan, 1978). Aronson (1978) conducted an experiment in cooperation with Texas school children. He pursued classroom integration through a new approach to student cooperation called the jigsaw puzzle classroom. The class was divided into six-person units. Each group was assigned a learning task based on assigned reading material, and each member of the group had to learn one sixth of the material. The individual student thus possessed a fraction of the material which all the students needed to learn. Each participant then had to teach the other five students his or her segment so all the material could be put together like a jigsaw puzzle. In traditional classroom settings, students compete for grades and attention. The competition supports the idea that other students are competitors, not resources. By contrast, the jigsaw puzzle method made the students interdependent. Even the weakest student had an important role, because the other students needed him to get the complete picture. There was every encouragement to transmit learning in the jigsaw classes, as otherwise important information would be missing. In contrast to the competitive classrooms, in the jigsaw classes it was in everyone's interest to perform at high levels.

A great deal of research has now been completed on the jigsaw classroom. The results strongly favor the method over the competitive classroom (Aronson, Blaney, Stephan, Sikes, & Snapp, 1978; Aronson & Gonzales, 1988; Walker & Crogan, 1998; Wolfe & Spencer, 1996). Students in the jigsaw classes demonstrated less prejudice and developed more cross-ethnic liking relationships. The children also demonstrated improved self-esteem. Cross-ethnic groups spent more time together outside the classroom, with enough quality contact to truly change stereotypic views. Improved relations are produced by removing in-group-out-group distinctions (Gaertner, Mann, Dovidio, & Murrell, 1990). In the process, students developed more empathy. It is a wonder that this method has not been more broadly applied, as it could be used in a variety of arenas where competitive or hostile categories prevent empathy and effective communication.

Summary
Prejudice is common to and prevalent in all modern societies. Prejudice is an attitude with three components: the affective component is called prejudice, the cognitive component the stereotype, and discrimination refers to the behavioral consequences. In the literature, the term prejudice is an umbrella term used for all three components. In the US, prejudice toward blacks derives from our history of slavery and the Jim Crow laws which followed and supported racial segregation. Our common history targeted all ethnic groups, as can be seen in the many pejorative terms available to the bigot. Intergroup enmity is pervasive and is a part of the human condition. However, prejudice is learned and can be unlearned.

Victims of bigotry suffer many harmful effects. Stereotypes produce self-fulfilling prophecies when the victim behaves in accordance with social expectations. When stereotypes are made salient to minority group members, they cause stereotype threat and lower performance on a variety of tasks. Stereotypes unfairly limit expectations since they ignore the overlap in behavior between groups and individual differences within groups. Stereotypes also bias the evaluation of performance, and thereby of eventual success. When rapid responses are required, stereotypes can be deadly for targeted people. Reaction times in video games and in real life show that people depend on simple heuristics in making life-or-death decisions. Reaction times are shorter in stereotype-consistent situations, for example when blacks are perceived as threatening.

Stereotypes which sustain prejudice are often based on ancestral myths or religious enmity. There is a grain of truth in all stereotypes: there is more crime in black communities, but not all blacks are criminals; females are more nurturant, but some mothers kill their children. Socialization determines the form stereotypes take in all societies; they are vast over-generalizations that do not take into account the historical conditions creating behavior. Discrimination occurs because society allows it or is indifferent. Stereotypes support discrimination, a discrimination that proceeds from ethnocentrism. People tend to give the in-group the benefit of any doubt, and consistently show in-group bias.

The history of the world is one of intergroup hostility and discrimination. The treatment of Japanese Americans in the US during World War II, and the persecution of political progressives, labeled communists, during McCarthyism, are examples of societal prejudice. Members of in-groups are rated favorably in employment, and indeed in all walks of life. It is a challenge for social psychologists to understand why intergroup enmity is so prevalent and divisive in human interaction.

Changing norms often create ambiguity: the targeted person is unsure whether prejudice or personal inadequacy is responsible for misfortune. We have experienced significant changes in racial and gender norms over the past decades. Black people recognized that stereotypes negatively impacted self-esteem. The “black is beautiful” movement arose in direct response to assaults on the self-concept of black children. Gender stereotypes have gone through a similar transformation. In the past, both genders accepted gender-limiting stereotypes; in the modern woman, however, self-deprecation has largely faded. A reserve of prejudice remains in intimate relations, where the social costs are very high.

Blatant prejudice is fading in modern society, but subtle biases remain. Prejudiced people are conforming to new norms of racial equality. The bigoted person still exists but may no longer tell the truth about his attitudes; his racism has taken on a different form. Modern forms of racism are expressed in opposition to busing as a means to integrate schools. Much opposition is also expressed against affirmative action, opposition justified by appeals to individual rights and community values. Egalitarian values are used to maintain the status quo and resist integration. A victim’s behavior is attributed dispositionally, and the victim is perceived as personally responsible for his misfortune. By refusing to consider the situational factors affecting behavior, the bigot can uphold, in his own mind, a belief in equality of treatment. The focus of concern becomes the “equal” treatment of the majority. Underlying the support for “egalitarian behavior” is a reserve of ill will.

Flagrant racism is fading in Europe as well, but indifference toward victims is also a form of racism. Modern racism promotes an ideology of merit and colorblind judgment, although this concern for equality is merely an excuse for indifference toward victims and racial inequality. The bogus pipeline and the Implicit Association Test uncover prejudice even among those who deny it to themselves.

Prejudice is complex behavior. It is learned, and therefore relies on the basic methods of learning: classical conditioning, reinforcement, and social learning. Early learning is of particular importance; by age seven the child understands discriminatory community norms. Once learned, stereotypes are difficult to change. The media plays a role in the learning of stereotypes by how it portrays minorities and women. Often the depiction is unflattering or menial. At times there are no role models at all for members of the minority.

As mentioned before, in modern racism social inequality is a precursor to prejudice, especially in times of rising expectations. Intergroup conflict is caused by inequality in consumption. Social inequality is used as a justification of prejudice, and inequality is presented as a desirable condition for the oppressed. Colonizers saw themselves as carrying the “white man’s burden” and believed that they provided “civilization”. Once discrimination has occurred, it is easy to justify it by stereotypes and pejorative terms. Another example is dogmatic religion, which is exploited to preserve the status quo of inequality, explained as a consequence of God’s will or fate. Realistic group conflict also occurs: the economically advantaged justify the status quo by prejudice toward the disadvantaged. The greater the economic and status differences, the greater the prejudice.

Scapegoating theory explains why hostility is directed toward substitute targets such as the disadvantaged rather than the real source of frustration. Often the source of the frustration is not easily identified; at other times it is too powerful. The aggression is displaced toward those who cannot respond and have little power. In the Robbers Cave study, Sherif demonstrated how competition elicited hostile behavior. That classic study also showed how to overcome prejudice through superordinate goals.

Research on group categorization has identified predictable in-group versus out-group distinctions. Groups serve functions of both survival and identity, the basis for in-group bias. The minimal group experiments demonstrate convincingly that even trivial group membership produces significant in-group bias. Although in-group bias has been demonstrated in varying national samples, it is less prevalent in interdependent cultures. When strong attachments are felt for groups central to our values, other groups are perceived as threatening. We gain great vicarious satisfaction from reference groups, which is why people identify with winning sports teams.

Social dominance theory describes society as a hierarchy of winners and losers. The tranquility of a social system is maintained by the dominant political apparatus. All dominant groups, races, or nationalities want to maintain the benefits of their position, and do not willingly yield power. Prejudice derives from the perceived threat that equality creates in a zero-sum world where the gain of one group is someone else’s loss.

People have abiding desires to be accepted by reference groups and significant others. Conformity and bigotry go hand in hand in societies where prejudicial norms are present. Prejudice is motivated by the desire to get along and gain acceptance by valued reference groups. Traditionally, the southern parts of the US had the most prejudicial norms. However, when the norms which sustained blatant prejudice changed, so did the bigots. Blatant prejudice gave way to new norms which allowed for more subtle forms of racism and sexism.

Institutions support prejudicial norms. Social institutions keep the targeted groups segregated or in defined menial status positions. Blacks were historically segregated in schools, in public transportation, and in public venues; they could not even get a drink of water from the same water fountain as whites. The dismantling of this structure of segregation was the great victory of the civil rights movement. Still today, however, norms prevent fair treatment of women and minorities. Norms may be an unspoken consensus about the aptitudes and abilities of females and minority groups. Although some new norms favor integration, many problems remain in the stereotypic descriptions in the media and in the lack of appropriate role models.

Personality dynamics explain some prejudice. Through differential childrearing, some people develop insecure personalities expressed in a search for status and the formation of authoritarian traits. Insecure persons have a need to rank higher than others on socially valued dimensions to support their self-esteem. Typically the authoritarian person possesses punitive attitudes toward the outcasts of society. In times of social upheaval, authoritarian tendencies increase, as insecurity underlies authoritarian beliefs and practices.

Social cost is an integrating concept which explains prejudice as a function of the desire to be accepted, and not rejected, by significant others. It is a more specific concept than normative conformity, as it explains the mechanism by which prejudice is enforced. Intimate relations carry potentially the highest social costs, which is probably why white parents still do not endorse interracial marriages. It is in intimate relationships that prejudice exacts the highest price: rejection by those most significant, parents and other important people. While the literature is largely silent on the relative importance of the various theories of prejudice, some studies point toward social cost as an integrating concept.

The topic of social cognition and prejudice covers several important concepts. The basic idea is that people become prejudiced as a result of trying to simplify the world. It is easier to stereotype and to hold prepared positions about the characteristics of people. Prejudice is a consequence of simplistic thinking and of relying on heuristics when retrieving important information from memory. At the same time, stereotypes rob individuals of their distinctive qualities and dismiss individuality within groups.

Members of out-groups are perceived as similar, and variability in traits and abilities is disregarded. There is also evidence that stereotypic categorization works to create more perceived similarity within the group. These heuristic shortcuts are consistent over time and conserve intellectual energy. Stereotypes are very resistant to change. Rational appeals to reconsider stereotypic information create counterarguments and carry little weight, as stereotypes are largely based on emotions. Bigots accept information consistent with the stereotype and reject inconsistent information. Biased information processing also determines the interpretation of interaction: the very same event is interpreted differently depending on the stereotype. Even stereotypes of other people can affect our behavior; witness the devaluation of someone just sitting next to an obese person.

Some researchers make a distinction between explicit and implicit attitudes. Attitude scales measure explicit prejudice of which the person is aware and can self-report. In times of changing norms, the bigot may be afraid to report truthfully. Implicit measures use priming methods, presenting stimulus pictures and recording reaction times to stereotype-consistent and stereotype-inconsistent words, to lay bare attitudes the person will not report.

Stereotypes are so resistant to change that only high quality contact and relationships are effective. The bigoted person needs to be bombarded with many examples of inconsistent information over long periods of time. Some stereotypes become automatic and stimulate little reflection. Still, some people do control their thinking when they observe contradictions between the stereotypic response and their values. Stereotypic thinking is aided by illusory correlations, when we think variables are correlated that in fact are not. The relationship between red hair and hot temper is a common illusory correlation: red hair is uncommon, and distinctive people or events give rise to these illusions.

Modern racism is based on the fundamental and ultimate attribution errors. The in-group is given the benefit of the doubt, while dispositional causes are attributed to the out-group. An accumulated consequence of modern racism is stereotype threat, where members of the minority fear they will confirm the stereotype. All groups experience stereotype threat when they perceive a competitive disadvantage under scrutiny or examination.

How can we reduce prejudice? Some believe that more education and contact will reduce prejudice, but education is not very helpful because of selective information processing. Research shows that only the right type of contact is helpful. Contact leading to the perception of commonality, as found in superordinate goals, creates feelings of common destiny. A cooperative world meets the needs of its people and will remove many sources of prejudice. The jigsaw puzzle method of learning points the way toward improved intergroup relations.