If I tell you that you should do something that you pretty much already want to do anyway, and my reason is something you think is true, you might sincerely believe that I’ve given you a logical and reasonable argument, even if there is no logical relationship between my conclusion and my reason.
If I say, “You should vote for [the candidate of the party you always support] because [the candidate of the party you hate] did/said this bad thing,” you might feel you’ve been given a logical and reasonable argument. But that isn’t a logical argument at all—it’s just an appeal to rabid factionalism. It feels logical because you’re likely to believe that your commitment to your party is rational, and that supporting the other party is irrational.
But whether you feel your position is rational isn’t actually a good measure. The test is this: is there a major premise that you would accept in the abstract, even if it didn’t get you the conclusion you want? This statement fails that test. This bad thing the other candidate did—is it something that would cause you to refuse to vote for your own party’s candidate if they did it? If not, then you don’t have a logical argument; you have rabid factionalism.
If I told you, “You should clean my litterboxes because 2 + 2 = 4,” you would probably catch the logical problem. But it’s no less logical than “You should vote for [the candidate of the party you always support] because [the candidate of the party you hate] did/said this bad thing.” Neither has a defensible major premise.
We don’t tend to catch the logical problems (unless we deliberately work at it) when we like the conclusion and the minor premise. If my evidence is associationally related to my conclusion (if you believe my evidence and you’re sorta open to my conclusion), you won’t notice the problem. If you’re feeling a little guilty about not doing enough around the house, or you feel you owe me a huge favor, then, suddenly, “You should clean my litterboxes because 2 + 2 = 4” might seem like a “good” argument to you. It’s only “good” insofar as it seems to give you the justification to do something you were pretty open to doing anyway. But it’s no more logical than it was when you didn’t want to do it.
Associational rhetoric works particularly well when we’re talking about an outgroup. Since we generally consider an outgroup icky, then an argument that says, essentially, “My policies are good because the outgroup is icky,” will genuinely seem to be a logical or reasonable argument, but it’s all association. (The outgroup might actually be icky and the policies disastrous.)
If you really like Chester Burnette as a candidate, and you loathe squirrels, and I say, “Chester Burnette is a great candidate because squirrels are evil!” the argument might seem “logical” to you. I’ve given you a claim, and I’ve given you a reason. I would probably follow up with lots of evidence about evil things squirrels have done. So, you could easily believe that your attitude about Burnette was totally logical.
But, what if Hubert Sumlin also advocated policies that would restrict squirrels? In logical terms, the “major premise” of the “Chester Burnette is a great candidate because squirrels are evil” enthymeme is… well, what is it? An enthymeme is supposed to be a compressed syllogism.
A compressed syllogism would have the major premise of “Everyone who hates squirrels is a great candidate,” a minor premise (the evidence) of “Chester hates squirrels,” and the conclusion that “Chester is a great candidate.”
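The syllogism above can be sketched as a rule applied consistently (a minimal, illustrative sketch—the predicate and the candidate list are hypothetical, not from any real campaign): if the major premise really is “everyone who hates squirrels is a great candidate,” then it endorses anyone who hates squirrels, Hubert Sumlin included.

```python
# Minimal sketch: the major premise of the enthymeme, treated as a rule.
# The predicate name and candidate list are illustrative assumptions.

def great_candidate(hates_squirrels: bool) -> bool:
    """Major premise: everyone who hates squirrels is a great candidate."""
    return hates_squirrels

# Minor premises: both candidates hate squirrels.
candidates = {"Chester Burnette": True, "Hubert Sumlin": True}

# Applied consistently, the premise endorses both candidates equally.
for name, hates_squirrels in candidates.items():
    print(f"{name} is a great candidate: {great_candidate(hates_squirrels)}")
```

If you wouldn’t accept the conclusion for Hubert, then the major premise isn’t one you actually hold, and the enthymeme isn’t doing logical work.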
Look at it this way. Imagine that I said, “Hubert Sumlin is a Nazi because he wears a brown shirt,” and you really don’t like Sumlin. You might notice he really does sometimes wear a brown shirt, and, of course, Nazis did too! That’s associational thinking, because it ignores the major premise. That’s only a logical argument if you are willing to say that everyone who wears a brown shirt is a Nazi. You aren’t (or shouldn’t be, anyway). Likewise, if hating squirrels makes someone a great candidate, you need to say that Hubert is just as good a candidate as Chester, or else your argument isn’t logical.
Associational reasoning isn’t necessarily bad. I happen to think it’s really helpful when you’re brainstorming, and it’s clear from the history of science that associational reasoning has had some tremendous benefits. But, like arguments from identity, it’s just one data-point. It is useful, but not sufficient, for democratic deliberation. It isn’t policy argumentation.
What’s useful about thinking in terms of logic and not association is that it helps us step back from what social psychologists call motivated reasoning. We can always find a reason to do something we want to do anyway—we are motivated to find reasons to support our ingroup, justify what we want to do, rationalize away something awful we’ve done. But being able to attach a reason to a belief doesn’t make a belief reasonable.
Sometimes when I make this argument, people will say, “I don’t care if it’s logical to support Chester because he hates squirrels; I just do.” Well, that’s fine, but then you don’t support Chester because he hates squirrels; you just support Chester. And you should admit that your opposition to Chester’s critics is just ingroup loyalty: you don’t value fairness across groups.