
Posts from 2015

Stepping in as Reviewers

Some years ago, when I served on the Academic Integrity Committee investigating allegations of fraud against Dirk Smeesters, it fell upon me to examine Word documents of some of his manuscripts (the few that were not “lost”). The “track changes” feature afforded me a glimpse of earlier versions of the manuscripts, as well as of comments made by Smeesters and his co-authors. One thing became immediately obvious: while all authors had contributed to the introduction and discussion sections, Smeesters alone had pasted in the results sections. Sometimes the results elicited comments from his co-authors: “Oh, I didn’t know we also collected these measures,” to which Smeesters replied something like “Yeah, that’s what I routinely do.” Another comment I vividly remember is: “Wow, these results look even better than we expected. We’re smarter than we thought!” More than a little ironic in retrospect. On the one hand, I found these discoveries reassuring. I had spent many hours…

Diederik Stapel and the Effort After Meaning

[Image caption: Sir Frederic, back when professors still looked like professors.]

Take a look at these sentences: “A burning cigarette was carelessly discarded. Several acres of virgin forest were destroyed.” You could let them stand as two unrelated utterances. But that’s not what you did, right? You inferred that the cigarette caused a fire, which destroyed the forest. We interpret new information based on what we know (that burning cigarettes can cause fires) to form a coherent representation of a situation. Rather than leaving the sentences unconnected, we impose a causal connection between the events they describe. George W. Bush exploited this tendency to create coherence by continuously juxtaposing Saddam and 9/11, thus fooling three-quarters of the American public into believing that Saddam was behind the attacks, without stating this explicitly. Sir Frederic Bartlett proposed that we are continuously engaged in an effort after meaning. This is what r…

p=.20, what now? Adventures of the Good Ship DataPoint

You’ve dutifully conducted a power analysis, determined your sample size, and run your experiment. Alas, p=.20. What now? Let’s find out.

[Image caption: The Good Ship DataPoint*]

Perspectives on Psychological Science’s first registered replication project, RRR1, targeted verbal overshadowing: the phenomenon that describing a visual stimulus, in this case a human face, is detrimental to later recognition of that face compared to not describing the stimulus. A meta-analysis of 31 direct replications of the original finding provided evidence of verbal overshadowing: subjects who described the suspect were 16% less likely to make a correct identification than subjects who performed a filler task. One of my students wanted to extend (or conceptually replicate) the verbal overshadowing effect for her master’s thesis by using different stimuli and a different distractor task. I’m not going to talk about the contents of the research here; I simply want to address the question that…
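As an aside, here is a minimal sketch of the kind of a priori power analysis the opening sentence alludes to, using the meta-analytic 16% difference. It assumes the 16% figure means 16 percentage points and uses an illustrative 54% baseline hit rate for the filler-task condition; both proportions are placeholders for illustration, not the RRR1 estimates.

```python
# Power-analysis sketch for a two-proportion comparison (statsmodels).
# The 0.54 baseline and the "16 percentage points" reading are assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

control = 0.54               # hypothetical correct-ID rate, filler task
describe = control - 0.16    # 16 points lower after describing the face

h = proportion_effectsize(control, describe)   # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=h, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Cohen's h = {h:.2f}, n per group = {n_per_group:.0f}")
```

Under these assumptions you land at roughly 80 subjects per group for 80% power; with that sample size in hand, the post’s question is what to conclude when the study nevertheless returns p=.20.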

The End-of-Semester Effect Fallacy: Some Thoughts on Many Labs 3

The Many Labs enterprise is on a roll. This week, a manuscript reporting Many Labs 3 materialized on the already invaluable Open Science Framework. The manuscript reports a large-scale investigation, involving 20 American and Canadian research teams, into the “end-of-semester effect.” The lore among researchers is that subjects run at the end of the semester provide useless data: effects found at the beginning of the semester somehow disappear or become smaller at the end. Often this is attributed to the notion that less-motivated or less-intelligent students procrastinate and postpone participation in experiments until the very last moment. Many Labs 3 notes that there is very little empirical evidence pertaining to the end-of-semester effect. To address this shortcoming in the literature, the project set out to conduct 10 replications of known effects and examine whether they change across the semester. Each experiment was performed twice by each of the 20 participating teams: on…
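To make the logic of the design concrete: the end-of-semester question is a moderation question, i.e., does the experimental effect interact with time of semester? This is not the Many Labs 3 analysis plan, just a toy sketch with simulated data and made-up variable names showing what such an interaction test looks like.

```python
# Toy moderation test: does the condition effect shrink late in the semester?
# All numbers and column names are hypothetical, for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "condition": rng.integers(0, 2, n),  # 0 = control, 1 = treatment
    "late": rng.integers(0, 2, n),       # 0 = early, 1 = end of semester
})
# Simulate an effect of 0.5 SD early that halves at the end of the semester.
df["y"] = (0.5 * df["condition"]
           - 0.25 * df["condition"] * df["late"]
           + rng.normal(size=n))

model = smf.ols("y ~ condition * late", data=df).fit()
print(model.summary().tables[1])  # the condition:late term carries the question
```

A negative condition-by-late coefficient would be the statistical signature of the end-of-semester lore; a null interaction, as in the Many Labs 3 data, is evidence against it.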