Paper Odyssey

(quick apology to my readers – I can’t seem to convince WordPress to consider my paragraph settings)
In March there was a buzz in our lab. After a long period of number crunching, finally we were able to provide the first biological interpretation of data coming out of our GRiP stochastic simulator, shedding light on some exciting characteristics of the transcription factor target finding process.
  1. The more non-specific DNA-binding molecules compete for space on the genome, the noisier the occupancy of specific binding sites becomes. In a population of cells that each contain few competing molecules, the absolute differences in the times these sites are occupied are small; in cells that see many competing molecules, the cell-to-cell differences become considerably larger.
  2. There is a widely accepted notion that ChIP signals correlate with the occupancy and affinity of a genomic binding site. We showed that with increasing concentrations of transcription factor, many low- to mid-affinity sites become occupied to the same degree as high-affinity binding sites. This effect is non-linear and represents a considerable caveat in the interpretation of ChIP data.
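
Point 1 can be caricatured in a few lines of code. The sketch below is my own toy illustration, not the GRiP model: all the biophysics is collapsed into a single probability that a hop of the sliding TF is blocked by a non-cognate molecule, and we compare the spread of target-finding times across simulated "cells" at low and high crowding.

```python
# Toy sketch of crowding noise in target search -- NOT the GRiP model.
# Crowding is reduced to one made-up parameter: the probability that a
# hop attempt is blocked by a non-cognate molecule.
import random
import statistics

def search_time(genome_len=60, crowding=0.0, rng=None):
    """Steps for a 1D random walker (a sliding TF) starting at site 0
    to first reach the target site, with each hop attempt failing with
    probability `crowding`."""
    rng = rng or random.Random()
    target = genome_len // 2
    pos, t = 0, 0
    while pos != target:
        t += 1
        if rng.random() < crowding:
            continue                      # hop blocked by a crowder
        pos = (pos + rng.choice((-1, 1))) % genome_len
    return t

def spread(crowding, reps=100, seed=1):
    """Standard deviation of target-finding times across `reps` cells."""
    rng = random.Random(seed)
    return statistics.stdev(search_time(crowding=crowding, rng=rng)
                            for _ in range(reps))

print(spread(0.0), spread(0.8))
```

With more crowders every hop takes longer on average, so both the mean and the absolute cell-to-cell spread of the finding times grow: the qualitative behaviour described in point 1, stripped of everything else.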

Okay. I admit that these “paper in one sentence” summaries are, well, dense, but we were really excited to see this stuff and got cracking. Within a few weeks, Radu, Rob and myself had written two manuscripts describing this research. We immediately made these manuscripts available on arXiv, hoping to engage with the community and perhaps get a little feedback. After all, we believe in open science and do not fear a public debate.

Haldane’s Sieve picked our work up and by the end of April we were on their top viewed papers list. And we got tweets. And retweets. And the occasional email. All in all, we came to the conclusion that our work was well received by the public.
How could we have been more wrong? Editors and referees explained to us what the public perceives!
Let’s start with story number 1: “The effects of transcription factor competition on gene regulation.”

Okay. We were bold. And naive. In its first incarnation, this manuscript went to PRL. Physical Review Letters is somewhat like the holy grail of physics. It’s sort of the equivalent of Nature, but is only read by people with big massive glasses. I admit, I didn’t even know it, but Radu is a big geek by training and felt we should at least try it, as this journal had published stories along the same lines, and, you know, he wanted a PRL paper. However, not this time. We received the usual sort of flak, and one referee even concluded that we were not “appealing to any devices typically employed by the physics community to lend intuition for the results” (I love that sentence although I’m not quite sure what it means), but the death sentence was certainly: “While the result about variability in the time to find the target site is of some interest, the authors do not make the case for its importance strongly enough for publication in PRL.” “Some interest”. That’s nice. But as such, nice is the little sister of shit. Alright. Point taken. Let’s move on.

I do a fair bit of reviewing for Nucleic Acids Research (in fact, there are only OUP books under the Christmas tree each year, if you know what I mean) and thus, I guess, I know what their readership likes, or what their editors and referees are looking for. A complex manuscript like this one requires a bit of time to review, and so I got worried when the editor got back to me within a week. Fortunately, he had sent it out to review. Unfortunately, the referees had evidently not read the bloody thing very thoroughly. Even more unfortunately, on the basis of a wild guess at what our manuscript might be describing (which it didn’t) or what we did wrong (which we didn’t), the editor rejected it. I appealed in a long, long email.

Just a few excerpts to give you an idea of their problems and our response (readers who haven’t read our arXiv paper can probably skip this, it’s complex stuff): The main criticism raised by both reviewers is the number of simulations (n = 100) we included per experimental condition. We agree this may appear low to the uninitiated reader, but the computational resource required for the simulation of mobile obstacles is on the order of three months per 100 simulations, in contrast to the 1-2 days required for immobile obstacles. Hence, we compromised on the lowest number of simulations that provides the statistical power to prove our point (as is also commonly done in molecular dynamics); in our case a positive Dip test for bi-modality (with p-values << 10^-5 in most cases).

Reviewer #1 further claims that the barrier effect that is present in our simulation is an artificially introduced one that obviously disappears in the presence of mobility. This is wrong. In the methods section of the MS we clearly state that in case the target site is covered by non-cognate molecules, these simulations are discarded: “We also allow immobile non-cognate TFs to cover the O1 site, which would exclude lacI molecules indefinitely from the O1 site. Thus, we perform 100 simulations for each set of parameters and simulations where the target site is never reached are discarded.”

This one here was a classic, a typical computational biology hater’s comment, which the editor bought into: “The reviewer further implies experimental validation of our results. We feel, however, that this is clearly beyond the scope of our work and would like to remind the editor that NAR previously published purely theoretical work on the matter, e.g. — Flyvbjerg, H.; Keatch, S. A. & Dryden, D. T. (2006), ‘Strong physical constraints on sequence-specific target location by proteins on DNA molecules’, Nucleic Acids Research 34(9), 2550-2557. — Halford, S. E. & Marko, J. F. (2004), ‘How do site-specific DNA-binding proteins find their targets?’, Nucleic Acids Research 32(10), 3040-3052. — Wunderlich, Z. & Mirny, L. A. (2008), ‘Spatial effects on the speed and reliability of protein-DNA search.’, Nucleic Acids Research 36(11), 3570-3578.”

Within hours of my appeal, I got this back: “As editors, we always take concerns of authors seriously. I passed your appeal up to the senior editors, who have further examined both your manuscript and the reviews. They got back to me with a firm statement that they support the original decision.” The firm statement was probably FUCK THEM!, because it would have been impossible to read the manuscript, digest the reviews, and see if/how our appeal was appropriate in that time. Funnily enough, the very same senior editor who was so firm with us asked me two days later for my esteemed opinion on another manuscript. I told him of the reservations I had about the way they’re running their show. He was “sympathetic”. Here comes the best part: “We try to be as fair as possible, but the crush of papers we are handling (up by nearly 100% over the past five years without any increase in capacity from Oxford Publishing) causes us to have to favor the papers that have the highest level of ratings and enthusiasm during the review process.”
Alright then. Reviewer enthusiasm. Since when is that an important metric?

On we go to Open Biology, the Royal Society’s new open-access journal. Having debated our annoying case with their senior editor, we gave them a try. Apparently, we got two different referees, because they found something else. The rejection letter started as a shock: “Unfortunately one review believes that the analyis carried out does not permit the conclusions that have been draw.  The other believes the advance to be incremental in nature.”
Uh-oh. …does not permit the conclusions. That sounded bad. Turns out that the referee simply asked for a p-value, which we hadn’t included in the manuscript. That was it. Done in less than two minutes. The other one claimed our work was a mere footnote on somebody else’s simulation… …a toy model published in the late 1980s on the basis of 200 lines of Fortran code.
I never heard back about my appeal. This could have to do with the editor’s experience with computational biology and simulation… …which is close to zilch.

Ultimately, we sent it to BMC Systems Biology. A happy ending? I’m not sure yet. It has been under review for the past seven weeks. If they don’t want it, can you guess what’s going to happen to it? Read on!

Now to story number 2: “The influence of transcription factor competition on the relationship between occupancy and affinity.”
Let me reiterate: although we’re doing quite abstract stochastic simulations here, the outcome of our research provides a direct handle for the interpretation of ChIP data. This story just yells “genome” at you. At least that’s what we thought.
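
A back-of-the-envelope way to see the effect is a two-state equilibrium (Langmuir) picture, far simpler than the stochastic simulations in the paper and purely illustrative, with made-up concentration and Kd units: as the TF concentration rises, a weak site saturates towards the same occupancy as a strong site, so the signal stops discriminating between them.

```python
# Toy two-state occupancy model -- an illustration of the saturation
# argument, not the GRiP simulator. Units are arbitrary.

def occupancy(conc, kd):
    """Equilibrium fraction of time a site is bound by the TF."""
    return conc / (conc + kd)

for conc in (0.1, 1, 10, 100, 1000):
    strong = occupancy(conc, kd=1)    # high-affinity site
    weak = occupancy(conc, kd=100)    # low-affinity site
    print(f"[TF]={conc:>6}: strong={strong:.3f} weak={weak:.3f} "
          f"weak/strong={weak / strong:.3f}")
```

At low concentration the two sites are easy to tell apart (the occupancy ratio is about 0.01); at the highest concentration both are nearly saturated and the ratio climbs above 0.9, so a 100-fold affinity difference almost vanishes from the occupancy readout.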

Genome Research was swift, taking just a week: “I regret that the consensus was that, although the work is likely of interest to those in the field, we are not convinced the advance and scope of the work will sufficiently appeal to our more general genomic readership.” In other words, they were saying “we know that the majority of our readers are probably misinterpreting large proportions of their genome-wide binding data – but they’re not interested in your esoteric number crunching”.

Well, take that, GR, there’s another journal that has genome in it: Genome Biology. They didn’t like it either. Same thing: “Although we appreciate that the reported new findings are likely to be of interest to others working in the field, I am afraid we do not feel that the findings represent the kind of significant new insights that would warrant publication in Genome Biology, which is aimed at a broad readership of biologists.” What I really liked, though, was the editor’s summary of how she understood the paper, which was actually quite accurate.

They also recommended a fast-track to their in-house sister journal BMC Genomics, but at this stage we were afraid that the genomics-y aspect of the work probably really wasn’t that well received, and so we opted for PLoS Computational Biology, really highlighting the computational merits of the work rather than the impact on the lives of millions of unaware genomicists… It took them two weeks to come up with this excuse, which still has me in shock, because I know the people involved very well and usually they’re quite sensible: “Discussions with editors providing the appropriate expertise resulted in the impression that your study is interesting from a theoretical perspective and provides an important counterpoint to the equilibrium thermodynamics-based/inspired models of TF-DNA interaction in vogue today. However, the manuscript is overly theoretical for our journal, especially considering that the topic (TF-DNA occupancy) is one where the community is fairly “data-endowed” and expects novel insights to directly pertain to trends in ChIP data sets.”
Seriously? Yep, there is theory, so it’s not just a video game. But to hear from an editor of a computational biology journal that my work is too, well, ahem, computational, that must be a joke. Moreover, it’s not the journal of ChIP measurements. Since when do editors have to bend to the expectations of the community? I think there’s a bit of educating that comes with the editorial role, and I’m sure no harm would have been done by teaching the “fairly data-endowed” community where they might err in the interpretation of their results. But at least he gave me a link to PLoS ONE.

So, ultimately, we sent the paper to PLoS ONE. It wasn’t all straightforward, but here it was just a matter of time. The first round of reviews took a month, but at least, for the first time, the reviews provided evidence that the reviewers had read and completely understood the paper and its implications. The things they suggested were largely optional, and we were able to submit a revised version within a week. Unfortunately, clashing with a busy period for the editor, it then took another five weeks to accept. But here we are.

My lesson? Learnt. Why should we waste our time sending stuff to so-called speciality journals when all they do is tell us that our stuff is too special and we should send it somewhere else? PLoS ONE. Way to go! And please, have a look at our original arXiv depositions and make your own judgement. Power to the people!
