
Could ad hominem arguments sometimes be OK?

By Brian D. Earp

Follow Brian on Twitter by clicking here.

You aren’t supposed to make ad hominem arguments in academic papers — maybe not anywhere. To get us on the same page, here’s a quick blurb from Wikipedia:

An ad hominem (Latin for “to the man” or “to the person”), short for argumentum ad hominem, is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Ad hominem reasoning is normally categorized as an informal fallacy, more precisely as a genetic fallacy, a subcategory of fallacies of irrelevance.

Some initial thoughts. First, there are some clear-cut cases where an ad hominem argument is plainly worthless and simply distracting: it doesn’t help us understand things better; it doesn’t wend toward truth. Let’s say that a philosopher makes an argument, X, concerning (say) abortion; and her opponent points out that the philosopher is (say) a known tax cheat — an attempt to discredit her character. Useless. But let’s say that a psychologist makes an argument, Y, about race and IQ (i.e., that black people are less “intelligent” than white people), and his opponent points out that he used to be a member of the KKK. Well, it’s still useless in one sense, in that the psychologist’s prior membership in the KKK can’t by itself disprove his argument; but it does seem useful in another sense, in that it might give us at least a plausible reason to be a little bit more cautious in interpreting the psychologist’s results.

So we have to zoom in on what exactly the fallacy is — or at least what the problem is. On the Wikipedia definition above, a claim or argument has to be “rejected on the basis of some irrelevant fact about the … person presenting the claim or argument” in order to count as truly fallacious. This can be broken up. First — “rejected” … What if I don’t outright reject an argument by (merely) referring to some ad hominem information, but rather share this information as a way of, say, priming the reader or the listener for doubt? Then if I go on to give some “proper” arguments, am I in the clear?

Second — “on the basis of” … What if I reject the argument on some other, non-fallacious, basis, but I think that adding in the ad hominem information will really “seal the deal” in terms of convincing my reader that the argument is no good? Is it OK to “seal the deal” in this way, since I’m not rejecting the argument on that basis alone?

Third — “irrelevant” … What if the information is, in some sense, relevant? In the KKK example, the relevance might be that the psychologist is likely (or at any rate, likelier than others) to have extra-scientific motivations for arriving at the conclusion he supposedly “found” with his research. This might have biased him in terms of study design, statistical fiddling, etc. While this won’t suffice to invalidate his arguments, it is certainly pertinent to our decision-making process about how to go about evaluating his work (i.e., more skeptically or with a finer-toothed comb than we otherwise might do). So is it OK to bring it up? And if so, where and how? In a formal academic paper? In frustrated emails with my colleagues?

There are of course more and less subtle ways of introducing “personal” information into an argument. One ubiquitous trick is to casually reference your opponent’s university or research institution, if it’s not particularly prestigious, or your supporter’s institution, if it is. Is that information strictly relevant? Probably not. Might it shift the reader’s perspective in terms of how they intuitively receive the argument? Probably yes. So, is this OK to do?

Some take-home questions:

1. Must the ad hominem fallacy always be avoided? If so, on what grounds? In what contexts?

2. When is it OK (or not OK) to introduce “personal information” into an academic argument — even if it doesn’t rise to the level of a fallacy? When is this information useful for getting at the truth? When is it rhetorically useful? When does it backfire?

3. When does avoiding personal-contextual angles actually undermine our ability to understand an argument or phenomenon?

I look forward to your thoughts and stories.






6 Comments on this post

  1. Your definition of ad hominem correctly notes that irrelevance is a necessary criterion; then you ask what if it’s relevant. The answer is that then it’s not ad hominem. I think what you mean to be saying could be framed better. I think you’re really getting at the fact that the original Latin “to the person” is not as good as the definition you cited, and that people sometimes shorthand the meaning of this term as simply “an argument against/discrediting the person.” But the remark about the KKK, even though it may instantly create prejudice, only becomes valid if it gives a possible hint as to motive.

    And it doesn’t really invalidate a factual claim; it invalidates only testimonial claims. For example, we require people doing reporting or testifying to declare conflicts of interest, so asking about them is a form of due diligence, in the absence of such a declaration, to assure that the person offering information is not motivated by some prospect of personal gain. Then again, as they say, absence of proof is not proof of absence, so really one must be suspicious of any claim that this kind of information would negate.

    Life is rarely crisp and clean like a debate room, not just in terms of the data exchanged but in terms of what the venue is or where the discussion begins and ends. So sometimes the output of a conversation isn’t an answer. Sometimes it is questions to pursue elsewhere or elsewhen. And among the valid questions to result is “Is there a hidden flaw?” Issues like this can be keys to flaws, because individuals and groups have characteristic mechanisms for pursuing very complex spin agendas in the modern world, often drawing on elaborately constructed webs of misinformation that make sorting through the arguments very complex. Knowing that it’s a KKK agenda, for example, might give someone the clue needed to unravel the argument in a way that doesn’t require relying on the fact of the KKK. So dismissing that as an irrelevant fact is not correct.

    As with many conflict of interest declarations, the noting of the fact is not a claim of invalidation, it merely allows exploration of the possibility that there could be a problem created. Many of the most ethical people declare conflicts of interest that turn out to be in fact not relevant, but they do so on a theory that it’s still well to allow the option of investigation and that proper ethics will withstand scrutiny.

    Then again, because of the almost overly powerful risk of prejudice, it’s a power that should be used sparingly. In part this means those who “cry wolf” by offering such information too often may themselves find others looking at such negatives with an equally skeptical eye. (If I recall correctly from Aristotle, virtue is the mean between unreasonable extremes. And usually the location of that mean is messy, but the point is that it’s on neither of the extreme ends, as here it means neither never mentioning nor always mentioning this kind of thing.)

    1. Really appreciate your thoughts, Kent — thanks. I think you’re right that I could have framed things better. In noting that the definition requires “irrelevance” and then asking “what if the info is relevant,” I was not asking whether the argument could still technically count as an ad hominem fallacy, but rather whether the argument could still be problematic in some way, even *if* the information was, in some sense, relevant.

  2. Right, so I think maybe logical relevance and conversational relevance are not the same. In a logical proof, an assertion is not relevant unless it supports a yet-to-come claim, often filling a gap that would not otherwise be filled. Conversation is not strictly logical, and even to the extent it is, it’s unordered and full of redundancy. In a proof there’s a sense in which relevance is a kind of causality, in that each step lays essential foundation for what will come next. But in the real world, to the extent that a proof is being constructed at all, you have to work forward and backward in time to construct the whole tree of what’s going on, assembling it like a jigsaw puzzle, dead-reckoning the pieces. So what counts as relevance may not correspond to causality, since a relevant thing is any fact that occurs anywhere in the tree, not necessarily respecting order or even the same part of the tree. It’s a wonder spoken language is intelligible at all.

    Relevance in conversation perhaps means just being necessary in any way to completing an overall picture, though maybe with bonus points for seeming more relevant, or more immediately relevant, if you’re working on the same part of the picture as everyone else. In computer search, which all of this dances around, issues of algorithmic complexity would dominate, since it’s possible for the tangle to get so bad that you’re kept from resolving the matter at all. The order of finding things, not just the ultimate connectivity of things, has to matter too, in ways I don’t know the terminology for. There’s a tendency in functional analyses of computer programs to think that any equivalent formulation is as good as any other when speaking abstractly (Turing equivalence and all that), but at some point the chasm of computability that is created, even without hitting a bona fide Halting Problem, can still be bad enough that it exceeds the useful time that a processor will run without crashing, or that a human will live, or even just that a human will talk to you before tapping his foot and losing patience.

    But you’re right as well that the operators are fuzzier. We don’t say things with scientific accuracy, so what does and doesn’t imply something is not crisply drawn like in logic. We say blunt things and assume that of the myriad implications of each such blunt thing, the intended part will show through. So there’s sifting to do. And we say things that are probabilistic or ambiguous.

    Ambiguity comes in several kinds. You can say something that might mean one thing or might mean another because the notation is not precise. Or you can say something that is unclearly expressed, and that would have been clear if only the notation had been used better. Or you can know that you don’t know, and say something that is both precise and clearly communicated but whose conveyed information is that you don’t know which of two things something could be. Probabilities presumably have that same dimensionality, in that you could attach fractional confidence to any of those axes.

    And so the statements by the person offering the evidence could be ambiguous in multiple ways, to varying degrees of confidence. So it’s hardly surprising human conversation would be littered with meta-guidance intended to help navigate the web of ambiguity better, since it must be trivial to make this space pragmatically, if not theoretically, non-computable. The more so if you think one or more parties to the conversation might intentionally or unintentionally be creating roadblocks to your successful navigation. It’s information warfare in miniature, just getting through a conversation. And if it takes discrediting an opponent to get to the result, that’s part of it. We may assume that everyone busy working on a proof together is working toward the same goal, after all; in information warfare, you don’t have that luxury. There are rat holes you might go down and never come out of, and some players in the game know it and want you going down them.

    All of the best AI search is based on making guesses and following likely paths because it’s known that doing an orderly search of everything is too hard. I think these ad hominems are like that.

    One last thought, and sorry for running on: In the 1960’s and 70’s when I was in school, we used to speak of the information explosion that was coming. It used to be a struggle for information. People used to clamor to learn things because it was hard to find info. You were lucky to own an encyclopedia because it had information. And you read it like the web, even though it didn’t change, because how else would you become worldly? Nowadays, the explosion has happened. People no longer value the finding of information, they value the filtering of it. Or should. No one fears they will not have access to information. They fear they will receive information that is useless that will keep them from getting to information they care about. To some extent, these are two sides of the same coin, but the social skills and heuristics that work in these worlds are quite different. It could be that the “ad hominem” is one of those things that needs to change, to become more refined, because it better suits the new world than it did the old. Perhaps it has more cases, for example. I bet it is only one exemplar of a set of things that we’re only gradually coming to see as different.

    If you want a place to go next, a nagging clue I’ve been holding onto is a US Supreme Court remark about free speech: that the answer to bad speech is more speech. In principle I agree with that. They’re saying that if someone defames you, for example, rather than risk chilling speech, why not empower you to speak, too, in your own defense. But as the US experience with unlimited spending after the Citizens United ruling shows, money can buy a LOT of speech, and you can drown people in it. This again is an issue of computational power and speed, not of ultimate truth. So it feels related to this discussion. Maybe examining computational speed/complexity as a practical barrier to logical implication, truth, and justice, and how it impacts the modern world, is worth doing.

    I thought when I heard that “answer to speech is more speech” remark that the Court just hasn’t yet truly confronted the notion of flaming. Flaming is the burying of someone in so many accusations, like Obama with Kenya or his birth certificate, or like climate change with deniers pushing stupid little things incessantly. A smokescreen, I suppose. But this can be raised to such high art that it will inevitably be back before the Court; there just isn’t yet the right case.

    Maybe there’s a student who can aggregate this and study it in more detail. My thoughts here are in jumbled form and, back to the heading topic, what is relevant among what I’m saying and what is not may have been obscured.

    Thanks for indulging me to write down some thoughts on something I’d likely not have gotten to elsewhere.

  3. The following is an excerpt from Wikipedia’s entry on ad hominem (accessed in April 2012; the excerpt no longer appears):

    Conflict of Interest: Where a source seeks to convince by a claim of authority or by personal observation, identification of conflicts of interest are not ad hominem – it is generally well accepted that an “authority” needs to be objective and impartial, and that an audience can only evaluate information from a source if they know about conflicts of interest that may affect the objectivity of the source. Identification of a conflict of interest is appropriate, and concealment of a conflict of interest is a problem.

    Currently found in Wikipedia article:
    When an ad hominem argument is made against a statement, it is important to draw a distinction whether the statement in question was an argument or a statement of fact (testimony).

    Doug Walton, Canadian academic and author, has argued that ad hominem reasoning is not always fallacious, and that in some instances, questions of personal conduct, character, motives, etc., are legitimate and relevant to the issue, as when it directly involves hypocrisy, or actions contradicting the subject’s words.

  4. Commenting very briefly with my personal view, I should note that under an ideal rational Bayesian approach, evidence regarding someone would be weighed like any other evidence. Stretching this to real-world absurdity, the fact that someone is a male, non-white person from a poor third-world country should always be counted as evidence for his being a mugger in the streets of Oxford. And I do not doubt it is, in fact, evidence for that. But unfortunately, human beings have evolved to be social machines, adaptation executors maximizing their fitness. Hence we tend to interpret any ethnic or group-related issue very strongly, and to have tendencies toward parochialism, racism and whatnot. Not only that, but due to the fundamental attribution error, we already tend to over-attribute things to people’s dispositions rather than to external contingencies. Therefore, we had better counteract that tendency with a fake deontological rule (as they always are, for a utilitarian like me) partly prohibiting that kind of reasoning. But in careful academic settings, I would hold we ought to relax those fake rules in proportion to the reasoning quality of the members involved. I believe the case of the KKK psychologist is straightforward in most settings, while I would only be completely OK with being considered a high-probability mugger in the middle of an argument among very rational thinkers. Should someone explicitly present my skin colour as minor evidence that my reasoning is wrong due to lower IQ, I would (or rather should) calmly present the counterevidence and continue with the discussion — but only in the highly speculative counterfactual worlds where moral enhancement was delivered flawlessly to the whole of humanity.

  5. There are some reverse cases we might as well mention, too, where knowing personal information might (sometimes fairly, sometimes not) magnify rather than diminish the strength of an argument. For example, whether rightly or wrongly, when talking about discrimination, I may give more weight to stories from people I think might reasonably have been discriminated against (i.e., from women or folks from races that have traditionally been discriminated against).

    Is there such a thing as “ad hominem praise”? That is, is there a name for the inverse situation, where information about a person gives an unfair boost to, rather than unfairly diminishes, a particular argument? The Wikipedia entry for “ad hominem” does not mention this case. But a recent case in the news seems both relevant and, importantly, entertaining.

    Also, even conviction of a crime may sometimes be an argument force magnifier. For example, some crimes about drug use or sexuality seem to be politics carried out by other means, and so they become badges of honor or tribal markers. I’m not sure that’s always good, but it happens. Depending on which side of the politics you’re on, knowing of a conviction might have either a positive or negative effect on you.

    It also now occurs to me that another related phenomenon is the so-called “dog whistle” effect, where only certain people in the audience are capable of hearing the boosting or diminishing, and others who are not tuned to the dog whistle “frequency” are oblivious. Knowing whether a person speaking is a member of a group that routinely communicates by dog whistles might legitimately be important to persuading someone of an interpretation of their remarks. (That’s assuming one believes that debate actually leads to persuading. I guess there are studies suggesting that many people just decide that the debate winner is the person they like the most. Another reminder of why these ad hominems have to be used so carefully: They have the power to make you like or dislike someone, and that may have overly strong persuasive effect not just because it changes your view of the argument but because, so some say, it directly decides debate outcome notwithstanding argumentation.)

Comments are closed.