ISSA Proceedings 2006 – Normatively Responsible Advocacy: Some Provocations From Persuasion Effects Research
This paper addresses one aspect of the relationship between argumentation studies and social-scientific persuasion effects research. Persuasion effects research aims at understanding how and why persuasive messages have the effects they do; that is, persuasion effects research has descriptive and explanatory aims. Argumentation studies, on the other hand, is at its base animated by normative concerns; the broad aim is to articulate conceptions of normatively desirable argumentative practice, both in the abstract and in application to particular instances, with a corresponding pedagogical aim of improving discourse practices. That is, one of these enterprises is dominated by descriptive and explanatory concerns and the other by normative interests.
In some previous work I have explored the relationship between these two undertakings by taking up the question of whether there is any intrinsic conflict between normatively-sound argumentation practices and practical persuasive success. The empirical evidence appears to indicate that a number of normatively-desirable advocacy practices – including clearly articulating one’s overall standpoint (O’Keefe, 2002), spelling out one’s supporting evidence and arguments (O’Keefe, 1998), and refuting counterarguments (O’Keefe, 1999) – commonly improve one’s chances for persuasive success.
This paper approaches the relationship of normative argumentation studies and descriptive persuasion effects research from a different angle, by pointing to several empirical findings that raise questions or puzzles about normatively-proper argumentative conduct. My purpose here is less to offer definitive conclusions about normative analyses of advocacy, and more to point to some social-scientific research findings that indicate some complications in the analysis of normatively desirable argumentative conduct – including some ways in which practical persuasive success may not be entirely compatible with normatively-desirable advocacy practices.
1. Background
As a preliminary, it may be useful to notice that at least some of what I have to say will intersect with some of the concerns of pragma-dialectics. Van Eemeren and Houtlosser have in recent years taken up questions concerning the nature of “strategic maneuvering” and its analysis from a pragma-dialectical standpoint. “Strategic maneuvering” refers to the advocates’ “attempt to make use of the opportunities available in the dialectical situation for steering the discourse rhetorically in the direction that serves their own interests best” (van Eemeren & Houtlosser, 2001). One of the questions van Eemeren and Houtlosser have addressed is specifically the question of when strategic maneuvering is normatively questionable (as opposed to normatively unobjectionable). At least some of my discussion will be seen to address that same question.
However, a complexity is introduced by the natural divergence between (a) the circumstance contemplated by (pragma-dialectical and other) ideals for critical discussion and (b) the circumstance in which argumentation and advocacy often are undertaken. Ideals for critical discussion often seem to contemplate a situation in which (at a minimum) two advocates undertake the articulation and defense of different points of view. There may be some third party to which the advocates’ arguments are addressed (as in legal proceedings), or each advocate may act as the other’s audience, but the key feature to which I want to draw attention is that there are two advocates.
But advocacy sometimes occurs in circumstances in which only one advocate is heard, such as consumer advertising. Yes, one may here think of the audience as (implicitly) the other advocate, but one would immediately want to acknowledge that the audience may not always be in the same sort of argumentative position as the advocate (for instance, the audience may not know as much about the relevant subject matter as does the advocate). And, yes, sometimes opposing views are available elsewhere; for instance, in the case of consumer advertising, consumer advocacy groups may publish opposing views or critical information. Even so, especially in instances of advocacy (such as commercial advertising) delivered through traditional media of mass communication, there is some asymmetry between the audience and advocate.
Moreover, there are circumstances in which there is (potentially) argumentation (in a broad sense) but not necessarily advocacy (in the usual sense). The kind of circumstance I have in mind is exemplified by those medical decision-making situations in which a patient is to choose among alternative courses of action. In such situations, health professionals can provide arguments and evidence that bear on that decision, even if they advocate no particular option.
So my interest here is broadly with any situation in which persons consider some potentially-argument-based claim, that is, some claim that might be supported by argument. I mention these contextual variations and divergences (between the circumstances of critical discussion and other circumstances) because I think that they bear on the task of transferring normative ideals from one circumstance to another – and because they foreshadow some of the complications to which I want to point.
2. Some empirical provocations
I now want to turn to a number of research findings in the social-scientific literature relevant to persuasion that seem to me to raise some questions about normatively-proper advocacy. I offer four examples, each considered individually, but I hope also to draw out some connections among these.
2.1 Gain-loss message framing
One much-studied message variation in persuasion effects research is (what is called) the contrast between “gain-framed” and “loss-framed” appeals. A gain-framed appeal emphasizes the advantages of compliance with the communicator’s viewpoint; a loss-framed appeal emphasizes the disadvantages of noncompliance. So, for instance, “If you take your hypertension medication, you’ll probably get to play with your grandchildren” is a gain-framed appeal; “if you don’t take your hypertension medication, you might not get to play with your grandchildren” is a loss-framed appeal. The underlying substantive consideration (offered as a basis for acceptance of the advocated view) is the same in the two appeals; what varies is how that consideration is “framed” (for some reviews and discussion, see O’Keefe & Jensen, 2006; Rothman & Salovey, 1997; Salovey, Schneider, & Apanovitch, 2002).
A parenthetical remark here: Although it’s easy to gloss gain- and loss-framed appeals as involving substantively identical arguments, in fact the two framings are not necessarily logically equivalent. Each appeal’s central claim takes the form of a conditional. For the loss-framed appeal, the conditional is either “if not-A, then U” (if the recommended action A is not undertaken, then some undesirable consequence U results; “if you don’t wear sunscreen, you may get skin cancer”) or “if not-A, then not-D” (if the recommended action A is not undertaken, then some desirable consequence D fails to be obtained; “if you don’t wear sunscreen, you may not have healthy skin when you’re older”). For the gain-framed appeal, the conditional is either “if A, then not-U” (if the recommended action A is undertaken, then some undesirable consequence U is avoided; “if you wear sunscreen, you can avoid skin cancer”) or “if A, then D” (if the recommended action A is undertaken, then some desirable consequence D is obtained; “if you wear sunscreen, you can have healthy skin when you’re older”). As will be noticed, the loss-framed conditionals are not identical to their gain-framed counterparts. For instance, the conditional “if not-A, then U” is not identical to “if A, then not-U.” After all, it could be true both that “if not-A, then U” and that “if A, then U”; indeed, people do sometimes appear to reason in such a fashion (“I’m going to get cancer no matter what I do”). However, it is probably unwise to assume that the difference between these two conditionals is readily apparent to casual observers. Moreover, this way of reconstructing gain- and loss-framed appeals (the way I’ve just formulated them) is not unimpeachable. For instance, although each appeal is a conditional, the consequent might be expressed differently, namely, as a changed probability of obtaining some outcome: “If you wear sunscreen, you decrease your chance of getting skin cancer” and “If you don’t wear sunscreen, you increase your chance of getting skin cancer.” And this alternative way of expressing the appeals makes them look substantively rather more similar. So, without overlooking the possibility that the two ways of expressing an appeal are not necessarily logically equivalent, we surely can say that the two ways of expressing an appeal involve the same underlying substantive consideration.
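To make the non-equivalence explicit, here is a minimal propositional rendering (my own formalization, offered purely for illustration, not drawn from the framing literature itself), with A standing for the recommended action and U for the undesirable outcome:

\[
\text{loss-framed: } \lnot A \rightarrow U \qquad\qquad \text{gain-framed: } A \rightarrow \lnot U
\]
\[
\text{Counterexample: take } A \text{ true and } U \text{ true. Then } \lnot A \rightarrow U \text{ holds (vacuously), while } A \rightarrow \lnot U \text{ fails.}
\]

This valuation is exactly the “cancer no matter what I do” pattern just mentioned: the undesirable outcome occurs whether or not the action is taken, so the loss-framed conditional can be true while its gain-framed counterpart is false.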
These gain-loss framing variations can be seen to involve the use of what van Eemeren and Houtlosser have called a “presentational device,” “the phrasing of moves in light of their discursive and stylistic effectiveness” (2001, p. 152; see also van Eemeren & Houtlosser, 2000, 2005). As van Eemeren and Houtlosser (2005, p. 32) indicate, “certain instances of strategic maneuvering” can be “dialectically sound” (normatively unobjectionable) while others are “fallacious” (normatively dubious). The project they take up is that of “developing criteria” for identifying sound and fallacious maneuvering.
I don’t want to be detained here by the specific question of whether van Eemeren and Houtlosser’s particular criteria would classify this as a “sound” or “fallacious” presentational device – in good measure because their criteria are not yet entirely well-specified, and in any case the application of any such criteria is acknowledged to involve “context-bound judgments of specific instances of situated argumentative acting” (van Eemeren & Houtlosser, 2005, p. 32). But I do want to rely on our common intuitions here about what makes for normatively responsible (or questionable) advocacy.
So the question is whether we are indifferent (normatively speaking) to whether an appeal is phrased as a gain or as a loss. And my sense is that there is not much ground for concluding that an advocate’s choice of a gain- or loss-framed appeal has normative implications. After all, this seems purely a presentational device: the underlying substance of the argument is the same in the two cases, which makes it difficult to see how the use of one or another framing could generally be fallacious (normatively dubious).
And I think this normative indifference is unaffected by learning that the two ways of framing the arguments are not always identical in their persuasive effects. For example, it seems to be the case that for messages advocating breast-cancer detection behaviors (such as mammography and breast self-examination), loss-framed appeals are generally more persuasive than gain-framed appeals (this generalization I offer tentatively, based on yet-unpublished work with Jakob Jensen). But this just seems to be an instance in which a presentational device is chosen for its persuasive effectiveness, without any normative hackles being raised. After all, it’s the same underlying argument.
2.2 Success rate vs. failure rate
But now consider a second (related, but distinct) example: The acceptability of a medical treatment or surgical procedure (e.g., the likelihood that patients will choose it) can be influenced by whether the outcomes are expressed in terms of the treatment’s success rate or its failure rate. For example, a surgical procedure is evaluated more positively when it is described as having a 90% survival rate than when it is described as having a 10% mortality rate (for some reviews, see McGettigan, Sly, O’Connell, Hill, & Henry, 1999; Moxey, O’Connell, McGettigan, & Henry, 2003).
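By contrast with the gain-loss conditionals discussed above, the two rate descriptions here are strictly interderivable. A minimal rendering (again my own, purely for illustration), with S the survival rate and M the mortality rate:

\[
M = 1 - S, \qquad \text{so } S = 0.90 \;\Longleftrightarrow\; M = 0.10.
\]

A 90% survival rate and a 10% mortality rate are thus the very same statistic, expressed two ways.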
This is quite similar to the first example. The two formulations (success rate and failure rate) are based on the same information – the same substantive consideration – but they present that information differently. Given that similarity, one might naturally suppose that we would similarly be normatively indifferent to the presentational form.
And yet surely we are not normatively indifferent here. I think the common intuition would be that there is something wrong with knowingly and purposefully choosing one or another formulation. These varying expressions (success-failure treatment descriptions) do represent a “presentational device” like gain-loss message framing, but somehow this second case seems to present something a little different from the first.
Part of the difference is unquestionably the implied setting, namely, a circumstance in which a health care professional is describing a treatment option to a patient. Here, we might think, the health care professional has an obligation to present the information in as transparent and unbiased a way as possible – and so, for instance, we might think it would be normatively most appropriate to express the information both ways. But this seems a little too easy an answer, for three reasons.
First, there is no guarantee that expressing the information both ways will somehow neutralize the effects of a given expression. For instance, it might be that once patients have been exposed to the failure-rate information, it will not matter if they also have the success-rate formulation (there’s not much empirical evidence concerning the effects of presenting both forms). That is, it’s not clear that there’s a normatively easy solution here.
Second, implicit in the idea that there is something normatively wrong about knowingly choosing one of these presentation formats may be the suggestion that it is somehow improper for the health care professional to have any advocacy role. Of course, there’s nothing wrong with the professional’s having a viewpoint (e.g., about whether the patient should undergo the procedure). The question is whether the professional ought to express that viewpoint, as opposed to being a disinterested adviser. The boundaries between these roles are blurry, and different patients might well have different preferences about the professional’s role. But it is easy to imagine that at least sometimes, it will be entirely appropriate for the health care professional to advocate a particular course of action – and in such a circumstance it would be misguided to complain that, by virtue of choosing one presentation format, the professional wasn’t being an unbiased adviser. That is to say, if there’s something normatively questionable about the choice of presentation format, it must be something other than that the knowing choice of format disqualifies the health care professional as an unbiased adviser (that is, something other than the practice’s putative incompatibility with an unbiased-adviser role).
Third, surely we don’t want to say that it’s permissible to selectively choose a presentation format as long as one is in an advocacy role but not when one is in an information-provider role; presumably we want even advocates to be normatively responsible. If the presentation format itself inappropriately influences outcomes, then all invocations of that format ought to be subjected to the same normative sanction, regardless of the communicator’s role as an advocate or an adviser. If it’s normatively irresponsible to choose one presentation format when one’s in a disinterested information-provider role, surely the presumption ought to be that it should be equally irresponsible for interested advocates to do so.
That is to say, even putting aside considerations of the communicator’s role in this setting, there look to be normative questions that arise from the use of this variation. And that, in turn, suggests that we might usefully revisit the previous example concerning gain-loss message framing. I earlier suggested that the use of gain-framed or loss-framed appeals raised no normative concerns, but, given this second example, that conclusion ought to be reconsidered.
2.3 Gain-loss message framing reconsidered
Think about gain-loss message framing this way: Persons exposed to a loss-framed appeal will (sometimes) make different choices than if they had been exposed to a gain-framed appeal. And, of course, it’s in the nature of things that this influence (of appeal framing) will be invisible to people – they will be unaware that their choices have been influenced by the particular way in which the appeal was framed. They will not know that if they had been exposed to a differently-framed appeal, they might have made different choices.
This way of putting things makes appeal framing look rather like a fallacy, at least in some traditional ways of thinking about fallacies. A long-standing characteristic worry about fallacies is that they lead an unsuspecting audience to be influenced in ways it otherwise would not have been. And here we might have a similar concern: Audiences will be influenced in ways they otherwise would not have been – not because of the substance of the appeals, but because of the phrasing of the appeals. (It’s important here that these examples involve variations in expressing the same underlying substantive consideration. Differential effects because of differentially meritorious arguments are no grounds for worries about normative misconduct.)
Indeed, this line of thinking makes one wonder whether it is possible for any presentational device – or at least any presentational device that makes a difference to persuasiveness – to be dialectically sound, that is, non-fallacious (not normatively questionable). If one way of expressing an argument has effects on people’s decisions that are different from the effects associated with some other way of expressing that argument, then the argument qua argument is presumably not getting its due. (Do notice that this way of formulating the problem relies on knowing the dancer from the dance – the argument from its expression. And while it may be useful for some purposes to separate the argument per se from its particular realization, that distinction ought not be presumed secure.)
These first two examples can be thought of as representing presentation devices that (potentially) exploit human psychological weaknesses. We might wish that it wouldn’t matter whether outcomes were expressed as “90% survival” or “10% mortality,” but it does – and an advocate can exploit that fact in the service of the advocate’s persuasive aims.
And one might argue that audiences should be protected from their weaknesses in this regard. Extensive empirical evidence has pointed to various systematic biases in reasoning, such as “optimism bias” (in which people are unrealistically optimistic about, for example, their relative susceptibility to health risks). [Some time ago, Finocchiaro (1992) recommended closer attention to similar phenomena by argumentation scholars.] And there is now considerable discussion of the legal implications of these sorts of phenomena – such as questions of whether government action (e.g., through restrictions on advertising) is appropriate or useful (e.g., Glaeser, 2006; Jolls & Sunstein, 2006; Trout, 2005).
For my purposes here, the central point to be noticed is simply that these findings point to a potential conflict between the practical interests of the advocate (who wants to persuade) and what we might think of as normatively-appropriate argumentative conduct.
I now want to consider two other examples that are rather different from these first two. In the first two examples, the normative questions raised by the advocacy practices arose (in a way) from the nature of the practices themselves. The next two examples point to normative considerations arising outside the nature of the advocacy practices themselves.
2.4 Risk information
The third example requires a brief preface to express a general normative premise, namely, that advocates should not knowingly give inaccurate information in support of their claims. This is the sort of premise that almost seems too obvious to state, much less justify. But I do take it for granted that most would think this premise unobjectionable.
So consider the circumstance commonly referred to as “risk communication,” that is, the presentation of information about risks of, for instance, individual behaviors (e.g., smoking), potential disease risks (e.g., risk of cardiovascular disease), environmental health threats (e.g., second-hand smoke), and so forth. Advocates will often find it useful to present risk information as part of their efforts at persuading people to undertake appropriate preventive or protective behaviors. I think we’d take it for granted that such advocates should present accurate risk information, and that the goal should be to give people an accurate picture of their risks (e.g., the risks of cigarette smoking).
But what if persons already overestimate the risk from (e.g.) smoking? Should we try to convince them that their risk is actually not as great as they suppose? This is not a purely theoretical question. There is some evidence that people do overestimate the dangers of smoking and alcohol consumption – and these risk perceptions are related to behavior; that is, persons with greater perceived risk are less likely to smoke or drink (e.g., Lundborg & Lindgren, 2002, 2004). The plain implication is that if people were given accurate information about these risks, they would be more likely to engage in these behaviors.
I can’t sort out here all of the normative questions stimulated by such findings. But, as examples, consider: Do advocates have an affirmative responsibility to correct such misperceptions? Or is it enough if the advocates do not themselves assert incorrect information?
That is, is it permissible for advocates to passively exploit the audience’s misunderstandings? Without actually asserting incorrect risk information, advocates might nevertheless (enthymematically) rely on the audience’s misperceptions in constructing their arguments. And, just to make things more complex here, what if the person presenting the risk information is in an information-provider role (e.g., a health care professional), not an advocacy role? Is such a person normatively compelled to correct misunderstandings about the degree of risk? My purpose here is not so much to offer answers to such questions as it is to point to how these social-scientific research findings raise some complications with respect to the normative treatment of advocacy conduct.
Specifically, I want to draw attention to two points. The first is the conflict here between the practical interests of the advocate (hoping to persuade people) and normative interests (e.g., in having communicators convey, or rely on, accurate information). For the persuader to be maximally effective in forwarding the advocate’s point of view may require abandoning what we would ordinarily take to be normatively-desirable practices of advocacy.
Second: These questions are not unique to considerations of argumentative conduct. They reflect long-standing, classic normative questions about weighing ends and means: We have some desired end (e.g., encouraging people not to smoke), and the question is what means we are willing to employ in order to achieve that purpose (e.g., knowingly providing inaccurate information, exploiting the audience’s incorrect beliefs, etc.). These parallel classic questions in moral philosophy about (for instance) “when, if ever, is lying morally justifiable?”
2.5 Self-efficacy appeals
The fourth example concerns (what can be called) self-efficacy appeals. As background: For many behaviors that persuaders might want to encourage, a key barrier to behavioral performance is attitudinal – people aren’t convinced that performing the behavior is a good idea. For instance, consumers may need to be persuaded that a given product is worth purchasing.
But for some behaviors, the primary obstacle to behavioral performance is not attitudinal. Rather, it’s a matter of one’s perceived ability to perform the behavior, commonly called “self-efficacy” or “perceived behavioral control” (e.g., Ajzen, 1991; Bandura, 1977). For example, people may have favorable attitudes about exercising, but nevertheless not engage in those behaviors because of a perceived inability: “I don’t have the time,” “I don’t have the equipment,” “the facilities are too far away,” and so on.
In such circumstances, persuaders obviously should focus on such self-efficacy beliefs. That is, rather than wasting time trying to convince people that exercise is desirable, persuaders should focus on convincing people that they do in fact have the ability to perform the action (e.g., Allison & Keller, 2004; for similar research on topics other than exercise, see Blok et al., 2004; Luszcynska, 2004). Notice that this is a straightforward instance of adapting a message to an audience, in which an advocate strategically selects which arguments to make on the basis of which of the audience’s current beliefs need to be changed (for general analyses of this sort of approach, see Fishbein & Yzer, 2003; Van den Putte & Dhondt, 2005). [This seems not quite the same as what van Eemeren and Houtlosser (2001, p. 152) call “selecting a responsive adaptation to audience demand,” which involves “putting the issue in a perspective that accords with the expectations and preferences of the audience” (p. 153); the selection here turns on which beliefs need changing, not on the audience’s preferences.] Indeed, a persuader who does not focus on such beliefs is likely to be unsuccessful.
But this particular persuasive strategy might have a potentially undesirable side effect when used in the context of some health-related behaviors, namely, it might stigmatize those with unhealthy conditions as being personally responsible for their circumstance, even if they are not. (For discussion of such strategies, see Guttman & Ressler, 2001; for broader discussions of ethical aspects of health-related appeals, see Guttman, 1997a, 1997b.) I don’t mean to say that this consequence guarantees that the strategy is normatively defective; for example, some might find stigmatization unobjectionable here (or in general). But obviously these collateral unintended effects might make us normatively uneasy.
I want to draw attention to two points with this example. The first is that, as in the preceding case, there is here a conflict between the practical interests of the advocate (hoping to persuade people to engage in the behavior) and larger normative interests (e.g., in avoiding inappropriate stigmatization). If the persuader does what is maximally effective in this circumstance, then normatively undesirable consequences may follow.
The second is that this example, like the preceding one, represents a specific realization of common general problems of normative assessment. Weighing the normative worth of actions often involves weighing a combination of desirable and undesirable consequences. In a sense, then, there’s nothing special about this last case, save that it arises in the context of advocacy. And in that way, this example is akin to the preceding one (inaccurate risk information), in that both involve weighing competing normative considerations: The inaccurate-risk-information case involves weighing the desirability of the ends and the means; this case involves weighing the desirability of the ends (the intended effects) and the unintended effects.
3. Conclusion
The examples discussed here are a varied lot. The first two examples (concerning gain-loss message framing and success/failure framing) raise normative questions about advocacy practices on the basis of the intrinsic properties of certain appeals. The latter two examples (concerning inaccurate risk perceptions and self-efficacy appeals) raise normative questions about advocacy practices on the basis of considerations outside the practices themselves – considerations of the desirability of the end (the risk perception example) or the unintended effects of the practice (the self-efficacy example).
But even the success/failure framing example is connected to larger contextualizing questions about the appropriate role of health care professionals in advising patients – should they advocate particular courses of treatment? Merely present information to let patients decide? And what if patients are incapable of digesting the information? And this, in turn, leads me to two broader points.
First: Paternalism inheres in persuasion. Advocates undertake advocacy because they think they know what other people should believe and do. And thus there is, to some degree, an inevitable collision between the usual sorts of normative interests of argumentation analysts (who are concerned that a good decision be reached, that the right outcome be obtained, with it being an open question just what the right outcome is) and the practical concerns of advocates (who are also concerned that a good decision be reached – but the advocate already knows what that decision should be). [Perhaps we might say: Advocates are paternalistic about ends (they know what decisions people should make), and argumentation analysts are paternalistic about means (they know how people should go about deciding).] And so, necessarily, larger questions about (for instance) balancing ends and means will inevitably enter into discussions about normatively-proper advocacy conduct. A satisfactory general analysis of normatively desirable argumentative conduct cannot be oriented only to the analysis of argumentative devices themselves, but rather must be situated within a broader understanding of the larger ends sought.
Second (and, in a way, as a consequence of the preceding): In all of this, we can see inscribed various classic ethical conundrums, such as normatively weighing ends and means. I take this to be yet another illustration of the permeability of the boundaries of argumentation studies. The very character of argumentation studies makes it an enterprise that touches many corners of scholarship, and for precisely that reason it is an enterprise for which interdisciplinary conferences like this one are especially valuable.
REFERENCES
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179-211.
Allison, M. J., & Keller, C. (2004). Self-efficacy intervention effect on physical activity in older adults. Western Journal of Nursing Research, 26, 31-46.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.
Blok, G. A., Morton, J., Morley, M., Kerckhoffs, C. C. J. M. C., Kootstra, G., & Vleuten, C. P. M. van der (2004). Requesting organ donation: The case of self-efficacy. Advances in Health Sciences Education, 9, 261-282.
Eemeren, F. H. van, & Houtlosser, P. (2000). Rhetorical analysis within a pragma-dialectical framework: The case of R. J. Reynolds. Argumentation, 14, 293-305.
Eemeren, F. H. van, & Houtlosser, P. (2001). Managing disagreement: Rhetorical analysis within a pragma-dialectical framework. Argumentation and Advocacy, 37, 150-157.
Eemeren, F. H. van, & Houtlosser, P. (2005). Strategic manoeuvring. Studies in Communication Sciences, 23-34.
Finocchiaro, M. A. (1992). Asymmetries in argumentation and evaluation. In F. H. van Eemeren, R. Grootendorst, J. A. Blair, & C. A. Willard (Eds.), Argumentation illuminated (pp. 62-72). Amsterdam: Sic Sat.
Fishbein, M., & Yzer, M. C. (2003). Using theory to design effective health behavior interventions. Communication Theory, 13, 164-183.
Glaeser, E. L. (2006). Paternalism and psychology. University of Chicago Law Review, 73, 133-156.
Guttman, N. (1997a). Beyond strategic research: A value-centered approach to health communication interventions. Communication Theory, 7, 95-124.
Guttman, N. (1997b). Ethical dilemmas in health campaigns. Health Communication, 9, 155-190.
Guttman, N., & Ressler, W. H. (2001). On being responsible: Ethical issues in appeals to personal responsibility in health campaigns. Journal of Health Communication, 6, 117-136.
Jolls, C., & Sunstein, C. R. (2006). Debasing through law. Journal of Legal Studies, 35, 199-241.
Lundborg, P., & Lindgren, B. (2002). Risk perceptions and alcohol consumption among young people. Journal of Risk and Uncertainty, 25, 165-183.
Lundborg, P., & Lindgren, B. (2004). Do they know what they are doing? Risk perceptions and smoking behaviour among Swedish teenagers. Journal of Risk and Uncertainty, 28, 261-286.
Luszcynska, A. (2004). Change in breast self-examination behavior: Effects of intervention on enhancing self-efficacy. International Journal of Behavioral Medicine, 11, 95-104.
McGettigan, P., Sly, K., O’Connell, D., Hill, S., & Henry, D. (1999). The effects of information framing on the practices of physicians. Journal of General Internal Medicine, 14, 633-642.
Moxey, A., O’Connell, D., McGettigan, P., & Henry, D. (2003). Describing treatment effects to patients: How they are expressed makes a difference. Journal of General Internal Medicine, 18, 948-959.
O’Keefe, D. J. (1998). Justification explicitness and persuasive effect: A meta-analytic review of the effects of varying support articulation in persuasive messages. Argumentation and Advocacy, 35, 61-75.
O’Keefe, D. J. (1999). How to handle opposing arguments in persuasive messages: A meta-analytic review of the effects of one-sided and two-sided messages. Communication Yearbook, 22, 209-249.
O’Keefe, D. J. (2002). The persuasive effects of variation in standpoint articulation. In F. H. van Eemeren (Ed.), Advances in pragma-dialectics (pp. 65-82). Amsterdam: Sic Sat.
O’Keefe, D. J., & Jensen, J. D. (2006). The advantages of compliance or the disadvantages of noncompliance? A meta-analytic review of the relative persuasive effectiveness of gain-framed and loss-framed messages. Communication Yearbook, 30, 1-43.
Rothman, A. J., & Salovey, P. (1997). Shaping perceptions to motivate healthy behavior: The role of message framing. Psychological Bulletin, 121, 3-19.
Salovey, P., Schneider, T. R., & Apanovitch, A. M. (2002). Message framing in the prevention and early detection of illness. In J. P. Dillard & M. Pfau (Eds.), The persuasion handbook: Developments in theory and practice (pp. 391-406). Thousand Oaks, CA: Sage.
Trout, J. D. (2005). Paternalism and cognitive bias. Law and Philosophy, 24, 393-434.
Van den Putte, B., & Dhondt, G. (2005). Developing successful communication strategies: A test of an integrated framework for effective communication. Journal of Applied Social Psychology, 35, 2399-2420.