Monday, August 10, 2015

On radical manuscript openness

One of my papers that has attracted a lot of attention lately is "The Fallacy of Placing Confidence in Confidence Intervals," in which we describe some of the fallacies held by the proponents and users of confidence intervals. This paper has been discussed on Twitter and Reddit, on blogs (e.g., here and here), and via email with people who found the paper in various places. A person unknown to me has used the article as the basis for edits to the Wikipedia article on confidence intervals. I have been told that several papers currently under review cite it. Perhaps this is a small sign that traditional publishers should be worried: this paper has not been "officially" published yet.


I am currently wrapping up the final revisions on the paper, which has been accepted pending minor revisions at Psychonomic Bulletin & Review. The paper has benefited from an extremely public revision process. When I had a new major version to submit, I published the text and all code on github, and shared it via social media. Some of the resulting discussions have been positive, others negative; some useful and enlightening, others not useful and frustrating. Most scientific publications almost exclusively reflect input from the coauthors and the editors and reviewers. This manuscript, in contrast, has been influenced by scores of people I've never met, and I think the paper is better for it.

This is all the result of my exploring ways to make my writing process more open, which led to the idea of releasing successive major versions of the text and R code on github with DOIs. But what about after it is published? How can manuscript openness continue after the magic moment of publication?

One of the downsides of the traditional scientific publishing model is that once the work is put into a "final" state, it becomes static. The PDF file format in which articles find their final form, and in which they are exchanged and read, enforces a certain rigidity, a rigor mortis. The document is dead and placed behind glass for the occasional passerby to view. It is of course good to have a citable version of record; we would not, after all, want a document to be a moving target, constantly changing on the whim of the authors. But it seems we can do better than the current idea of a static, final document, and I'd like to try.

I have created a website for the paper that, on publication, will contain the text of the paper in its entirety, free to read for anyone. It also contains extra material, such as teaching ideas and interactive apps to assist in understanding the material in the paper. The version of the website corresponding to the "published" version of the paper will be versioned on github, along with the paper. But unlike the paper at the journal, a website is flexible, and I intend to take advantage of this in several ways.

First, I have enabled hypothes.is annotation across the entire text. If you open part of the text and look in the upper right-hand corner, you will see three icons that can be used to annotate the text:
The hypothes.is annotation tools.
Moreover, highlighting a bit of text will open up further annotation tools:

Highlighting the text brings up more annotation tools.
Anyone can annotate the document, and others can see the annotations you make. Am I worried that on the Internet, some people might not add the highest quality annotations? A bit. But my curiosity to see how this will be used, and the potential benefits, outweigh my trepidation.
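For readers who want to enable the same kind of annotation on their own pages, hypothes.is provides a standard embed script; a minimal sketch, following hypothes.is's published embed instructions, is a single script tag:

```html
<!-- Load the Hypothesis client on any page. Annotations are stored by
     the Hypothesis service, so no server-side setup is required. -->
<script src="https://hypothes.is/embed.js" async></script>
```

Once the script loads, visitors see the annotation icons and can highlight and annotate any text on the page.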

Second, I will update the site with new information, resources, and corrections. These changes will be versioned on github, so anyone can see what the changes were. Because the journal will have the version of record, there is no possibility of "hiding" changes to the website. So I get the best of both worlds: the trust that comes with a clear record of the process, along with the ability to change the document as the need arises. And the entire process can be open, through the magic of github.

Third, I have enabled together.js across every page of the manuscript. together.js allows collaboration between people looking at the same website. Unlike hypothes.is, together.js is meant for small groups to privately discuss the content, not for public annotation. This is mostly to explore its possibilities for teaching and discussion, but I also imagine it holds promise for post-publication review and drafting critiques of the manuscript.
The together.js collaboration tools allow making your mouse movements and clicks visible to others, text chat, and voice chat.

Critics could discuss the manuscript using together.js, chatting about its content. The communication in together.js is peer-to-peer, ensuring privacy; nothing is actually managed by the website itself, except making the collaboration tools available.
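Enabling together.js on a page is similarly lightweight; a minimal sketch, following Mozilla's TogetherJS embed instructions, is a script tag plus a button that starts a session:

```html
<!-- Load TogetherJS and add a button that starts a collaboration
     session; clicking it produces a link to share with collaborators. -->
<script src="https://togetherjs.com/togetherjs-min.js"></script>
<button onclick="TogetherJS(this); return false;">Start collaborating</button>
```

The session link can be sent to a small group, who then see each other's cursors and can chat while reading the same page.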

The best part of this is that it requires no action or support from the publisher. This is essentially a sophisticated version of a pre-print, which I would release anyway. We don't have to wait for the publishers to adopt policies and technologies friendly for post-publication peer review; we can do it ourselves. All of these tools are freely available, and anyone can use them. If you have any more ideas for tools that would be useful for me to add, let me know; the experiment hasn't even started yet!

Check out "The Fallacy of Placing Confidence in Confidence Intervals," play around with the tools, and let me know what you think.

11 comments:

  1. This is awesome! Such a great idea, and well-implemented.

    I know there are a number of moving parts to the online publication. But would it be possible for you to set up a template so people can easily do the same for their papers? If you put the template on Github, it would be easy for others to fork it and use it for their own papers. There is also a way to host pages on Github, for those who don't have their own webhost.

    I'd be happy to help if you need it. Several of my colleagues and I tried to set up a website for online publishing including several of these tools, but it never got off the ground for various reasons. So if I can enable even some aspect of that plan, I'd be happy to! Tweet me @marvelousjeff if you're interested :)

    ReplyDelete
    Replies
    1. I'm going to try to make this available via an R package; once I have some time, I'll put up an initial go on github and tweet it.

      Delete
  2. Congratulations! It looks like a refinement of PeerJ/paper-now and my article Molecular computers.
    One important thing is that articles like this facilitate a more direct form of validation than peer review.
    Peer review is a social form of validation (somebody, most of the time anonymous, claims the work is valid), whereas these articles provide the data and programs that allow the reader to validate the work.

    ReplyDelete
  3. This comment has been removed by a blog administrator.

    ReplyDelete
  4. Great way to publish. I like the reader interaction, the interactive visualisations, the choice of formats.

    To me (as an open source software lover), the form is not only nice and pleasant, but also makes the content more credible. I know, a fallacy ;-)

    ReplyDelete
  5. This comment has been removed by the author.

    ReplyDelete
  6. This comment has been removed by a blog administrator.

    ReplyDelete