Monday, January 20, 2014

Why Social-Behavioral Primers Might Want to Be More Self-Critical


During the investigation into the scientific conduct of Dirk Smeesters, I expressed my incredulity about some of his results to a priming expert. His response was: You don’t understand these experiments. You just have to run them a number of times before they work. I am convinced he was completely sincere.

What underlies this comment is what I'll call the shy-animal mental model of experimentation. The effect is there; you just need to create the right circumstances to coax it out of its hiding place. But there is a more appropriate model: the 20-sided-die model (I admit, that's pretty spherical for a die, but bear with me).

A social-behavioral priming experiment is like rolling a 20-sided die, an icosahedron. If you roll the die a number of times, a 20 will turn up at some point. Bingo! You have a significant effect. In fact, given what we now know about questionable and not-so-questionable research practices, it is fair to assume that the researchers are actually rolling a 20-sided die on which perhaps as many as six sides have a 20 on them. So the chances of rolling a 20 are quite high.
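To put some rough numbers on the analogy, here is a minimal sketch (my own back-of-the-envelope illustration; the ten attempts are an arbitrary choice, not a figure from any actual study) of how quickly "at least one significant result" becomes likely when you keep rolling.

```python
# Back-of-the-envelope sketch of the die analogy: the chance of getting
# at least one "winning" roll when you keep trying.

def p_at_least_one_hit(p_hit, n_rolls):
    """Probability of at least one winning face in n_rolls independent rolls."""
    return 1 - (1 - p_hit) ** n_rolls

# An honest 20-sided die: one winning face, i.e. p = 1/20 = 0.05 (the usual alpha).
print(p_at_least_one_hit(1 / 20, 10))   # ~0.40 after ten attempts

# The QRP-inflated die: assume six of the twenty faces count as a 20.
print(p_at_least_one_hit(6 / 20, 10))   # ~0.97 after ten attempts
```

Even with the honest die, "just running the experiment a number of times until it works" turns up a 20 fairly often; with the inflated die it is nearly guaranteed.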

[Image caption: I didn't know they existed, but a student who read this post brought this specimen to class; she uses it for Gatherer games.]
Once the researchers have rolled a 20, their next interpretive move is to treat the circumstances that happened to coincide with the roll as instrumental in producing the 20. The only problem is that they don't know what those circumstances were. Was it the physical condition of the roller? Was it the weather? Was it the time of day? Was it the color of the roller's sweater? Was it the type of microbrew he had the night before? Was it the bout of road rage he experienced that morning? Was it the cubicle in which the rolling experiment took place? Was it the fact that the roller was a 23-year-old male from Michigan? And so on.

Now suppose that someone else tries to faithfully recreate the circumstances that co-occurred with the rolling of the 20, using the information provided by the original rollers. They recruit a 23-year-old male roller from Michigan, wait until the outside temperature is exactly 17 degrees Celsius, make the experimenter wear a green sweater, have him drink the same IPA on the night before, and so on.

Then comes the big moment. He rolls the die. Unfortunately, a different number comes up: a disappointing 11. Sadly, he did not replicate the original roll. He tells this to the first roller, who replies: Yes, you got a different number than we did, but that's because of all kinds of extraneous factors that we didn't tell you about because we don't know what they are. So it doesn't make sense for you to try to replicate our roll, because we don't know why we got the 20 in the first place! Nevertheless, our 20 stands and counts as an important scientific finding.

That is pretty much the tenor of some contributions in a recent issue of Perspectives on Psychological Science that downplay the replication crisis in social-behavioral priming. This kind of reasoning seems to motivate recent attempts by social-behavioral priming researchers to explain away an increasing number of non-replications of their experiments.

Joe Cesario, for example, claims that replications of social-behavioral priming experiments by other researchers are uninformative because any failed replication could result from moderation, although a theory of the moderators is lacking. Cesario argues that initially only the originating lab should try to replicate its findings. Self-replication is in and of itself a good idea (we have started doing it regularly in our own lab), but as Dan Simons rightly remarks in his contribution to the special section: The idea that only the originating lab can meaningfully replicate an effect limits the scope of our findings to the point of being uninteresting and unfalsifiable.

[Image caption: Show-off! You're still a "false positive."]
Ap Dijksterhuis also mounts a defense of priming research, downplaying the number of non-replicated findings. He talks about the odd false positive, which sounds a little like saying that a penguin colony contains the odd flightless bird (I know, I know, I'm exaggerating here). Dijksterhuis claims that it is not surprising that social priming experiments yield larger effects than semantic priming experiments because the manipulations are bolder. But if this were true, wouldn’t we expect social priming effects to replicate more often? After all, semantic priming effects do; they are weatherproof, whereas the supposedly bold social-behavioral effects appear sensitive to such things as weather conditions (which Dijksterhuis lists as a moderator).

Andrew Gelman made an excellent point in response to my previous post: false positive is actually not a very appropriate term. He suggests an alternative phrasing: overestimating the effect size. This seems a constructive perspective on social-behavioral priming without any negative connotations. Earlier studies provide inflated estimates of the size of social-behavioral priming effects.
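Gelman's point can be illustrated with a small simulation. The following is my own sketch, not anything from his work; the parameter values (a true effect of d = 0.2, twenty participants per group) are assumptions chosen purely for illustration. When true effects are small and samples are small, the subset of results that happens to reach significance necessarily exaggerates the effect.

```python
# Minimal simulation of effect-size overestimation conditional on significance.
# All numbers are illustrative assumptions, not estimates from any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, sims = 0.2, 20, 20_000          # small true effect, small samples
significant_ds = []

for _ in range(sims):
    a = rng.normal(0.0, 1.0, n)            # control group
    b = rng.normal(true_d, 1.0, n)         # treatment group, true d = 0.2
    if stats.ttest_ind(b, a).pvalue < 0.05:
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        significant_ds.append((b.mean() - a.mean()) / pooled_sd)

print(f"True effect size: {true_d}")
print(f"Mean effect size among significant results: {np.mean(significant_ds):.2f}")
# Typically prints a value several times larger than 0.2: the studies that make
# it into print as "significant" systematically overestimate the effect.
```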

A less defensive and more constructive response by priming researchers might therefore be: "Yes, the critics have a point. Our earlier studies may indeed have overestimated the effect sizes. Nevertheless, the notion of social-behavioral priming is theoretically plausible, so we need to develop better experiments, pre-register our experiments, and perform cross-lab replications to convince ourselves and our critics of the viability of social-behavioral priming as a theoretical construct."

In his description of Cargo Cult Science, Richard Feynman stresses the need for researchers to be self-critical: We've learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature's phenomena will agree or they'll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven't tried to be very careful in this kind of work. And it's this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in Cargo Cult Science.

It is in the interest of the next generation of priming researchers (just to mention one important group) to be concerned about the many non-replications, coupled with the large effect sizes and small samples that are characteristic of social-behavioral priming experiments. The lesson is that the existing paradigms, which may have produced overestimated priming effects, are not going to yield further insight and ought to be abandoned.

I'm reminded of the Smeesters case again. Smeesters had published a paper in which he had performed variations on the professor-prime experiment, reporting large effects (the results that prompted my incredulity). This paper has now been retracted. One of his graduate students had performed yet another variation on the professor-prime experiment; she found complete noise. When we examined her raw data, the pattern was nothing like the pattern Uri Simonsohn had uncovered in Smeesters' own data. When confronted with the discrepancy between the two data sets, Smeesters gave a defense that we now see echoed by the social-behavioral priming researchers discussed here: that experiment was completely different from my experiments (he did not specify how), so of course no effect was found.

There is reason to worry that defensive responses about replication failures will harm the next generation of social-behavioral priming researchers because these young researchers will be misled into placing much more confidence in a research paradigm than is warranted. Along the way they will probably waste a lot of valuable time, face lots of disappointments, and might even face the temptation of questionable research practices. They deserve better. 

Sunday, January 12, 2014

Escaping from the Garden of Forking Paths


My previous post was prompted by a new paper by Andrew Gelman and Eric Loken (GL), but it did not discuss its main thrust because I had planned to defer that discussion to the present post. However, several comments on the previous post (by Chris Chambers and Andrew Gelman himself) leapt ahead of the game, and so there is already an entire discussion in the comment section of the previous post about the topic of our story here. But I'm putting the pedal to the metal to come out in front again.

Simply put, GL’s basic claim is that researchers often unknowingly create false positives. Or, in their words: it is possible to have multiple potential comparisons, in the sense of a data analysis whose details are highly contingent on data, without the researcher performing any conscious procedure of fishing or examining multiple p-values.

[Image caption: My copy of the Dutch Translation.]
Here is one way in which this might work. Suppose we have a hypothesis that two groups differ from each other and we have two dependent measures. What constitutes evidence for our hypothesis? If the hypothesis is not more specific than that, we could be tempted to interpret a main effect as evidence for the hypothesis. If we find an interaction with the two groups differing on only one of the two measures, then we would also count that as evidence. So we actually have three bites at the apple (the main effect, or an interaction driven by either one of the two measures), but we're working under the assumption that we only have one. And this is all because our hypothesis was rather unspecific.
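To see how much this matters, here is a minimal null simulation in the spirit of this scenario. It is my own sketch, not an analysis from GL's paper; the sample size and the way the tests are operationalized (the main effect as a t-test on the average of the two measures, the interaction as a t-test on their difference, collapsing the second and third bites into one test) are assumptions made purely for illustration.

```python
# Null simulation: two groups that truly do not differ, two dependent measures,
# and a hypothesis vague enough that a main effect OR an interaction counts as support.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, sims, alpha = 30, 20_000, 0.05
hits = 0

for _ in range(sims):
    # Under the null, both groups are drawn from the same distribution on both measures.
    g1 = rng.normal(size=(n, 2))
    g2 = rng.normal(size=(n, 2))

    # Bite 1: the group main effect (difference on the average of the two measures).
    p_main = stats.ttest_ind(g1.mean(axis=1), g2.mean(axis=1)).pvalue
    # Bites 2 and 3 (collapsed): the group-by-measure interaction
    # (difference on measure 1 minus measure 2).
    p_inter = stats.ttest_ind(g1[:, 0] - g1[:, 1], g2[:, 0] - g2[:, 1]).pvalue

    if p_main < alpha or p_inter < alpha:
        hits += 1

print(f"Nominal alpha: {alpha}, actual false-positive rate: {hits / sims:.3f}")
# Prints roughly 0.10 rather than 0.05, and that is with only two tests; add
# covariates, outlier rules, or subgroups and the inflation keeps growing.
```

Nobody in this simulation is consciously fishing; the vagueness of the hypothesis does the inflating all by itself.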

GL characterize the problem succinctly: There is a one-to-many mapping from scientific to statistical hypotheses. I would venture to guess that this form of inadvertent p-hacking is extremely common in psychology, perhaps especially in applied areas, where the research is less theory-driven than in basic research. The researchers may not be deliberately p-hacking, but they’re increasing the incidence of false positives nonetheless.

In his comment on the previous post, Chris Chambers argues that this constitutes a form of HARKing (Hypothesizing After the Results are Known). This is true. However, this is a very subtle form of HARKing. The researcher isn't thinking: well, I really didn't predict this, but Daryl Bem has told me (pp. 2-3) that I need to go on a fishing expedition in the data, so I'll make it look like I'd predicted this pattern all along. The researcher is simply working with a hypothesis that is consistent with several potential patterns in the data.

GL noted that articles that they had previously characterized as the product of fishing expeditions might actually have a more innocuous explanation, namely inadvertent p-hacking. In the comments on my previous post, Chris Chambers took issue with this conclusion. He argued that GL looked at the study, and the field in general, through rose-tinted glasses.

The point of my previous post was that, on the basis of a single study, we often cannot reverse-engineer from the published results the processes that generated them. We cannot know for sure whether the authors of the studies initially accused by Gelman of having gone on a fishing expedition really cast out their nets or whether they arrived at their results in the innocuous way GL describe in their paper, although GL now assume it was the latter. Chris Chambers may be right when he says this picture is on the rosy side. My point, however, is that we cannot know given the information provided to us. There often simply aren't enough constraints to make inferences about the procedures that have led to the results of a single study.

However, I take something different from the GL paper. Even though we cannot know for sure whether a particular set of published results was the product of deliberate or inadvertent p-hacking, it seems extremely likely that, overall, many researchers fall prey to inadvertent p-hacking. This is a source of false positives that we as researchers, reviewers, editors, and post-publication reviewers need to guard against. Even if researchers are on their best behavior, they still might produce false positives. GL suggest a remedy for the problem, namely pre-registration, but point out that this may not always be an option in applied research. It is, however, an option in experimental research.

GL have very aptly named their article after a story by the Argentinean writer Jorge Luis Borges (who happens to be one of my favorite authors): The Garden of Forking Paths. As is characteristic of Borges, the story contains the description of another story. The embedded story describes a world where an event does not lead to a single outcome; rather, all of its possible outcomes materialize at the same time. And then the events multiply at an alarming rate as each new event spawns a plethora of other ones.

I found myself in a kind of garden of forking paths when my previous post produced both responses to that post and responses I had anticipated after this post. I’m not sure it will be as easy for the field to escape from the garden as it was for me here, but we should definitely try.

Thursday, January 9, 2014

Donald Trump’s Hair and Implausible Patterns of Results


In the past few years, a set of new terms has become common parlance in post-publication discourse in psychology and other social sciences: sloppy science, questionable research practices, researcher degrees of freedom, fishing expeditions, and data that are too-good-to-be-true. An excellent new paper by Andrew Gelman and Eric Loken takes a critical look at this development. The authors point out that they regret having used the term fishing expedition in a previous article that contained critical analyses of published work.

The problem with such terminology, they assert, is that it implies conscious actions on the part of the researchers, even though, as they are careful to point out, the people who have coined, or are using, those terms (this includes me) may not think in terms of conscious agency. The main point Gelman and Loken make in the article is that there are various ways in which researchers can unconsciously inflate effects. I will write more about this in a later post. I want to focus on the nomenclature issue here. Gelman and Loken are right that despite the post-publication reviewers' best intentions, the terms they use do evoke conscious agency.

We need to distinguish between post-publication review and ethics investigations in this regard, as these activities have different goals. Scientific integrity committees are charged with investigating the potential wrongdoings of scientists; they need to reverse-engineer behavior from the information at their disposal (published data, raw data, interviews with the researcher, their collaborators, and so on). Post-publication review is not about research practices. It is about published results and the conclusions that can or cannot be drawn from them.

If we accept this division of labor, then we need to agree with Gelman and Loken that the current nomenclature is not well suited for post-publication review. Actions cannot be unambiguously reverse-engineered from the published data. Let me give a linguistic example to illustrate. Take the sentence Visiting relatives can be frustrating. Without further context, it is impossible to know which process has given rise to this utterance. The sentence is a standing ambiguity and any Chomskyan linguist will tell you that it has one surface structure (the actual sentence) and two deep structures (meanings). The sentence can mean that it is frustrating to visit relatives or that it is frustrating when they are visiting you. There is no way to tell which deep structure has given rise to this surface structure.

It is the same with published data. Are the results the outcome of a stroke of luck, optional stopping, selective removal of data, selective reporting, an honest error, or outright fraud? This is often difficult to tell and probably not something that ought to be discussed in post-publication discourse anyway.

So the problem is that the current nomenclature generally brings to mind agency. Take sloppy science. It implies that the researcher has failed to exert an appropriate amount of care and attention; science itself cannot be sloppy. As Gelman and Loken point out, p-hacking is not necessarily intended to mean that someone deliberately bent the rules (and, in fact, their article is about how researchers unwittingly inflate the effects they report; more about this interesting idea in a later post). However, the verb implies actions on the part of the researcher; it is not a description of the results of a study. The same is true, of course, of fishing expedition. It is the researchers who are going on a fishing expedition; it is not the data that have cast their lines. Questionable research practices is obviously a statement about the researcher, as is researcher degrees of freedom.


But how about too-good-to-be-true? Clearly this qualifies as a statement about the data and not about the researcher. Uri Simonsohn used it to describe the data of Dirk Smeesters, and the Scientific Integrity Committee I chaired adopted this characterization as well. Still, it has a distinctly negative connotation. Frankly, the first thing I think of when I hear too-good-to-be-true is Donald Trump's hair. And let's face it: no researcher on this planet wants to be associated, however remotely, with Donald Trump's hair.

What we need for post-publication review is a term that does not imply agency or refer to the researcher (we cannot reverse-engineer behavior from the published data) and that does not have a negative connotation. A candidate is implausible pattern of results (IPR). Granted, researchers will not be overjoyed when someone calls their results implausible, but the term does not imply any wrongdoing on their part and yet does express a concern about the data.

But who am I to propose a new nomenclature? If readers of this blog have better suggestions, I’d love to hear them.