What the Heck Does Transparency in Scientific Research Have to Do with Cross-Sector Collaborations?

Yesterday, I headed down to DC’s City Hall, the John A. Wilson Building, for a lunchtime talk entitled “Research Transparency: Finding Effective Solutions.”

If you, like me, are not an academic researcher, this might sound like a dry topic.  But there were two reasons I chose to attend.  The first is that I appreciate the opportunity to learn, especially in spaces that are related to, but not the same as, those I work in.  The author Steven Johnson has called this “the adjacent possible.”  The second was the bio of Brian Nosek, who was giving the talk.  He was invited because of his role as the co-founder and Executive Director of the Center for Open Science, a nonprofit working to enable open and reproducible research practices worldwide.  But his primary research interest as a psychology professor at the University of Virginia is implicit bias, “the gap between values and practices, such as when behavior is influenced by factors other than one's intentions and goals.”  And this is a topic that interests me greatly.

Nosek started his talk with Robert K. Merton’s framework on the ethos of modern science, published in 1942.  In it, Merton laid out the norms of scientific work as well as the counternorms.

As I reviewed the slide with the norms and counternorms, they felt strikingly familiar from my experiences with cross-sector collaborations.  I went back to one of my favorite visuals, developed by Chris Thompson, which explores what cross-sector efforts look like when they are collaborating (norms) and co-blab-orating (counternorms).

When I put the two charts side-by-side, it was a bit eerie to see how much they overlapped:


Even in 1942, Merton recognized the paradox that institutions and systems were structured to incentivize the counternorms.  Again, this is something I know all too well from cross-sector collaborations, where people are paid to do jobs in their organizations while simultaneously being asked to engage in collaborative structures where no one is the boss of anyone else and there are no accountability processes in place.  It's easy to guess where people prioritize their time and energy.

Nosek co-founded the Center for Open Science as a platform for shifting the research process toward the norms.  And as an organization of scientists, they wanted to demonstrate how the counternorms affect their fields’ ability to produce meaningful research.

Nosek highlighted a couple of fascinating experiments.  The first gave twenty-nine independent research teams the same data set and asked them to determine whether, in soccer, referees are more likely to give red cards to dark-skinned players.  (Go here for a more extensive explanation of this study.)

So what did the research teams find? According to FiveThirtyEight.com, “Despite analyzing the same data, the researchers got a variety of results. Twenty teams concluded that soccer referees gave more red cards to dark-skinned players, and nine teams found no significant relationship between skin color and red cards.”  The exercise showed how the “strategic” decisions researchers make in designing their analyses accounted for the differences.  In aggregate, it also suggests how important it is not to rely on the findings of a single study.

But Nosek and his compatriots didn’t stop there.  They also wanted to delve into research that had been published in peer-reviewed journals, highlighting how, despite the high potential for variability, research studies are seldom repeated.  This isn't very surprising, since we privilege the new and the breakthrough.  Nosek and his team selected 100 published, peer-reviewed studies and recruited research teams from across the country to replicate them.  All were replicated using the original methodology, yet only 39% came to the same conclusion.  (If you want to dig into this work more -- and it’s fascinating -- the Planet Money podcast did an episode on it in January 2016.)  These results underlined the problem of using a single study to "prove" something, as well as the insufficiency of peer review in which only the paper, not the methodology or data, is shared.

But the talk was not all doom and gloom.  Nosek shared some of the ways that the Center for Open Science and its compatriots have been making inroads.  Scientific research is a big, dispersed, and decentralized field, and COS recognizes that many approaches will be needed to change its culture, mindsets, and incentives.  But small changes are having significant impacts -- from journal badges that recognize transparency of data and process to the uptake of the Center’s technology platform and framework that support working transparently.

Nosek also shared his belief that the peer review process could be reconceived to strengthen research and accountability.  The traditional scientific process, like the traditional strategic planning process, is linear.  It involves a small group of people working in isolation to design, collect and analyze data, report, and publish.  Traditionally, feedback from the scientific community comes only at the end -- in peer review from a small group of readers and after publication.  This would be like releasing a new product after beta testing it with only ten people.  The work will either sink or swim, making the risk much greater, and the opportunity for learning and improvement is lost.

Nosek suggested that, though these approaches are still largely untested, peer review should happen earlier and differently.  Colleagues -- particularly people who think differently or are even intellectual adversaries -- could be invited to collaborate and co-create the research design before collection and analysis even begin.  He also suggested that the co-designed framework could be used as a tool for holding scientists accountable for reporting all of their findings (even those that aren’t new or didn’t demonstrate meaningful relationships) during peer review prior to publication.


I love how this idea draws on decades of organizational development research, which shows that diverse, well-managed teams produce better results.  And I am eager to learn how it influences how researchers work and collaborate going forward.

In my career, I’ve done applied work on everything from emergency preparedness and response to the health care workforce to regional economic development to poverty.  All of these efforts have been through cross-sector collaborations, and all of them have needed and relied on research to understand the problems and inform the development of solutions.

During Nosek’s talk, I was heartened to learn that there’s an opportunity for two-way learning between researchers and cross-sector practitioners.  I would be excited to share what I’ve learned about designing and facilitating collaborations effectively in the name of better research.  If you’re interested in connecting about this blog post, or about doing some cross-disciplinary sharing, please get in touch via email, Twitter, Instagram, or LinkedIn.

Nosek’s talk was part of a brown bag lunch series hosted by The Lab @ DC.  The Lab was created as part of Mayor Bowser’s administration and is housed in the City Administrator’s Office of Performance Management (OPM).  It aims to use scientific insights and methods to test and improve policies and to provide timely, relevant, and high-quality analysis to inform the District's most important decisions.  Check out future Lab @ DC events (they’re free)!