Open Science

The School of Psychology is committed to Open Science. We aim to produce research that is both transparent and accessible. Transparency means we believe research should be conducted and reported in a way that ensures it is reproducible. Accessibility means we believe the public should have unrestricted access to our discoveries.

Guidance and FAQs on open science practices, examples of best practice from researchers in the school, and links to helpful resources can be found below. These grew out of school-wide discussions and the recommendations of the school's Open Science Working Group.

The distinction between confirmatory and exploratory research

If you can make a well-grounded, specific prediction about the effect of an experimental manipulation or relationship between measured variables, your experiment is confirmatory. In contrast, exploratory experiments are useful for generating and refining hypotheses, validating measurements, and establishing boundary conditions. Most research programmes will involve both types of experiments, unless you are working within well-established paradigms.

Confirmatory experiments must be planned in advance to prevent bias. Decisions about details such as exclusion criteria, sample size, and statistical treatment of the data should be made and documented before the data are viewed. The plan should also include an unambiguous statement of the criteria for accepting the hypothesis. Adjusting the plan after seeing the data can inflate effect sizes and lead to over-confidence in results.

Pre-registering a research experiment or project

Pre-registration entails logging an experiment plan with a time-stamp on a public repository such as the Open Science Framework. For confirmatory research, this should be standard practice. For exploratory research, it can be helpful for formalizing a project plan, but it is less essential. If you are concerned about others seeing your plan, you can apply an embargo, but be aware that once the embargo expires, the plan becomes part of the public record.

When you log a pre-registration, consider it a commitment to sharing the outcome of the planned project and linking it to the pre-registration. Ideally, by the end of a project’s lifecycle, your pre-registration will be accompanied by the full set of anonymised data, and a link to the empirical report (published or archived). Here is one example: https://osf.io/r3fu3/.

Some guidance on how to prepare pre-registrations:

https://www.cos.io/initiatives/prereg

Some examples from the school of published papers with links to their pre-registrations:

Fleur, D. S., Flecken, M., Rommers, J., & Nieuwland, M. S. (2020). Definitely saw it coming? The dual nature of the pre-nominal prediction effect. Cognition, 204, 104335.

The registration: https://osf.io/6drcy/registrations

Mahon, A., Clarke, A.D.F. & Hunt, A.R. (2018). The role of attention in eye movement awareness. Attention, Perception & Psychophysics, 80, 1691-1704.

The registration: https://osf.io/j3fv2/

Swainson, R., Prosser, L., Karavasilev, K., & Romanczuk, A. (2021). The effect of performing versus preparing a task on the subsequent switch cost. Psychological Research, 85, 364-383.

The registration: https://osf.io/yqc9q

Should I pre-submit my project plan?

If you already have a well-specified hypothesis and a plan for your methods and analysis, there are substantial benefits to submitting this plan to a journal that accepts registered reports and replications. The peer-review process will provide useful feedback on your planned project before you invest the effort in carrying it out. If the plan is accepted, the journal commits in principle to publishing the results regardless of the outcome. This guarantee is especially valuable for larger, riskier, and more resource-intensive projects, and for replications, both of which can be difficult to publish if they fail to confirm their hypotheses.

A list of journals that accept Registered Reports:

https://www.cos.io/initiatives/registered-reports

Examples of Registered Reports and replications from the school:

Kopiske, K. K., Bruno, N., Hesse, C., Schenk, T., & Franz, V. H. (2016). The functional subdivision of the visual brain: Is there a real illusion effect on action? A multi-lab replication study. Cortex, 79, 130-152.

https://abdn.elsevierpure.com/en/publications/the-functional-subdivision-of-the-visual-brain-is-there-a-real-il

Clarke, A.D.F., Barr, C. & Hunt, A.R. (2016). The effect of visualization on visual search performance. Attention, Perception & Psychophysics, 78, 2357-2362.

http://aura.abdn.ac.uk/bitstream/2164/9033/1/visualiseSearch.pdf

How big should my sample be?

Increasingly, psychology journal editors are declining to review papers that do not provide an adequate justification for their sample size. This is a positive development for our field, which has been criticized for a history of publishing under-powered experiments. Below are some papers that offer helpful guidance on estimating effect sizes, calculating power, and justifying your sample size.

Brysbaert M. (2019). How Many Participants Do We Have to Include in Properly Powered Experiments? A Tutorial of Power Analysis with Reference Tables. Journal of Cognition, 2(1), 16. https://doi.org/10.5334/joc.72

Brysbaert, M., & Stevens, M. (2018). Power Analysis and Effect Size in Mixed Effects Models: A Tutorial. Journal of Cognition, 1(1), 9. DOI: http://doi.org/10.5334/joc.10

Correll, J., Mellinger, C., McClelland, G. H., & Judd, C. M. (2020). Avoid Cohen’s ‘Small’, ‘Medium’, and ‘Large’ for Power Analysis. Trends in Cognitive Sciences, 24(3), 200-207. https://doi.org/10.1016/j.tics.2019.12.009

Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: a method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28, 1547-1562.

https://doi.org/10.1177/0956797617723724
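One way to build intuition for the power calculations discussed in the papers above is to simulate them directly. The sketch below is a minimal, stdlib-only Python illustration (not taken from any of the cited tutorials): it estimates power for a two-group comparison by Monte Carlo simulation, using a normal approximation to the test statistic, which is reasonable for groups of roughly 30 or more. All names and parameter choices here are for illustration only.

```python
import random
import statistics

def simulated_power(effect_size, n_per_group, crit=1.96, n_sims=4000, seed=1):
    """Estimate power for a two-sample comparison by simulation.

    Draws two normal samples (SD = 1) whose means differ by `effect_size`
    (i.e. Cohen's d), computes a Welch-style test statistic, and counts
    how often it exceeds the two-sided 5% critical value (normal
    approximation; `crit` = 1.96).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        # Standard error of the difference in means (Welch form).
        se = (statistics.variance(a) / n_per_group
              + statistics.variance(b) / n_per_group) ** 0.5
        z = abs(statistics.mean(b) - statistics.mean(a)) / se
        if z > crit:
            hits += 1
    return hits / n_sims

# A 'medium' effect (d = 0.5) with 64 participants per group should come
# out close to the textbook analytic result of roughly 80% power.
print(round(simulated_power(0.5, 64), 2))
```

Simulation-based approaches like this generalize to designs where no analytic power formula exists (e.g. mixed-effects models, as discussed by Brysbaert & Stevens, 2018), at the cost of longer run times.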

What if I need to make changes to a project plan?

Research rarely goes exactly to plan, and it can be difficult to predict and prepare for every contingency. If you need or wish to make adjustments to the research plan after you have seen your results, the first principle should be transparency: justifiable improvements to the plan made during the course of the project need to be carefully documented and disclosed. Where possible, the original plan should be carried out in parallel with the altered one, and the results following the registered plan should be made available for the public record. If this is not possible, you may need to reconsider whether the experiment is really confirmatory, or whether it has crossed over to being exploratory.

Where should I submit my paper to ensure it will be available to the public?

An important criterion when selecting a journal should be its access policy. Journals fall into two broad categories: open access and subscription-based. Open access journals charge fees to authors but make the final published version of the paper freely available to the public. Subscription-based journals charge fees to the end user (a library or individual) for access to published articles. However, most subscription-based journals allow the author to retain copyright of the original text of the article, before the journal's post-acceptance services (copy editing, typesetting) are applied. This means your copy of the manuscript can (and should) be made freely available to the public, often referred to as "green open access". Some journals impose lengthy embargoes and restrictions on when and where you can share your version of the paper, however, so check their open access policy carefully. This is a useful tool for doing so: https://v2.sherpa.ac.uk/romeo/

The Center for Open Science has a grading scheme for evaluating journals based on their open science practices: https://topfactor.org/ The score reflects the extent to which a journal encourages practices such as pre-registration, pre-submission, open data, and open materials.

Can/should I share my manuscript before publication?

It is common practice to share manuscripts on a public archive, such as PsyArXiv or bioRxiv. This gets your research findings out to other researchers and the general public quickly, without having to wait for lengthy review and editorial decisions, revisions, and resubmissions. It also gives your research a permanent DOI, which makes it citable by others right away. Most publishers do not restrict the use of preprints by authors; a summary of publisher policies is here:

https://en.wikipedia.org/wiki/List_of_academic_publishers_by_preprint_policy .

The main downside to consider here is that the peer-review process could lead to substantial modifications or even uncover errors. While you can submit an updated version of the paper to the archive, you cannot remove the previous version from the public record.

What about copyright?

When you upload your paper, data, and any other materials to a public archive or repository, you will be given the option to specify a licence for others to use them. This typically involves selecting one from a drop-down menu, but there is a baffling array of options here. For most academic research purposes, the Creative Commons Attribution (CC BY) licence is appropriate. This is the most permissive level of licensing, giving others the right to use, build on, and adapt your work, as long as they clearly attribute it to you, the creator.

For more on creative commons licenses:  https://creativecommons.org/licenses/

Sharing data

Sharing data conveys transparency, confidence and accountability. Shared data also accelerates research progress. Others may want to replicate your analysis, apply alternative analyses, combine data from multiple labs to refine estimates of effect sizes, use your data as a baseline or prior, or conduct a meta-analysis. Not only is open data good scientific practice, it is now required by many journals and granting agencies.

The UKRI policy on open data:

https://www.ukri.org/apply-for-funding/before-you-apply/your-responsibilities-if-you-get-funding/making-research-data-open/

Best practice is to share data in as complete and raw a form as is feasible and reasonable. Data should also be accessible (e.g. avoid formats that are readable only by proprietary software). Be sure to include a "readme" file that explains how the files are organized and labelled. It is good practice to also include the scripts or templates you used to, for example, apply exclusion criteria, produce summary tables, and apply statistical models.
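To make the last point concrete, here is a minimal Python sketch of the kind of exclusion script that could accompany a shared dataset. The file name and column names (`rt_ms`, `accuracy`) and the thresholds are entirely hypothetical; the point is that exclusions live in a script shared alongside the raw data, so anyone can reproduce the path from raw data to the analysed sample.

```python
import csv

# Hypothetical raw-data layout, for illustration only:
# columns participant_id, rt_ms, accuracy in a file such as "raw_data.csv".

def load_and_exclude(path):
    """Read raw trial data and apply documented exclusion criteria.

    Returns (kept, excluded) so the number and identity of excluded
    rows can be reported, not just silently dropped.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    kept, excluded = [], []
    for row in rows:
        rt = float(row["rt_ms"])
        acc = float(row["accuracy"])
        # Example (pre-registered) criteria: drop anticipatory responses
        # faster than 200 ms and accuracy at or below chance (0.5).
        if rt < 200 or acc < 0.5:
            excluded.append(row)
        else:
            kept.append(row)
    return kept, excluded
```

A short note in the readme ("run `load_and_exclude` on `raw_data.csv` to regenerate the analysed sample") is usually enough for others to follow.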

For more on data management, see the university’s webpages:

https://www.abdn.ac.uk/staffnet/research/data-management-11350

Most researchers share data using the OSF. Instructions for how to upload are here:

https://journals.sagepub.com/doi/full/10.1177/2515245918757689

This includes instructions for how to share a link to a repository only with reviewers and editors, so you can keep your data private until your paper is accepted if you wish. Many other repositories are also available, such as Zenodo or the UK Data Service.

In psychology, our data mostly come from human subjects, so whenever we handle, store, or share data, it is particularly important to consider data protection rights carefully and abide by all regulations and statutes. UKRI has provided guidance on how to ensure your public data are GDPR-compliant here:

https://www.ukri.org/wp-content/uploads/2020/10/UKRI-020920-GDPR-FAQs.pdf

For data containing sensitive or personal information, the UK Data Service has protocols in place for safeguarding and controlling access.

https://www.ukdataservice.ac.uk/manage-data/legal-ethical/access-control.aspx

Some examples of our open data:

  • Gregory, S. E. A., Langton, S. R. H., Yoshikawa, S., Jackson, M. C. (2020). A cross-cultural investigation into the influence of eye gaze on working memory for happy and angry faces. Cognition and Emotion, 34(8), pp. 1561-1572. Data available at: https://osf.io/qru7g/
  • Hesse, C., Koroknai, L., & Billino, J. (2020). Individual differences in processing resources modulate bimanual interference in pointing. Psychological research, 84(2), 440-453. Data available at: https://zenodo.org/record/1207940#.YGci8z_TWUk
  • Clarke, A.D.F. & Hunt, A.R. (2016). Failure of intuition when choosing whether to invest in a single goal or split resources between two goals. Psychological Science, 27, 64–74. Data available at: https://osf.io/btkjw/

Sharing materials

Sharing your research materials (stimulus sets, experiment protocols, code, images, etc.) maximizes reproducibility and encourages others to build on your work. When sharing materials, it is important to make sure everyone who was involved in producing them is happy to have them shared. This can be more complicated than it sounds, because pieces of research code, subroutines, templates, and materials tend to be shared freely and informally between researchers, and it can be difficult to trace who was responsible for all the components. If there is any reasonable doubt about the origin of any substantial part of the materials, it is safer not to share them on a public repository. For any code you do share, ensure it is tidy and well-commented. Materials such as well-controlled stimulus sets, questionnaires with normative data, and general-purpose statistical models are especially useful for the whole field.

Example:

Clarke, A.D.F., Stainer, M., Tatler, B. & Hunt, A.R. (2017). The saccadic flow baseline: Accounting for image-independent biases in saccadic behaviour. Journal of Vision, 17(11):12. https://github.com/Riadsala/SaccadicBiases/tree/master/scripts/flow_share

Post-publication open science practices

As soon as your paper has been accepted, it is critical to send the final version (before the journal has copy-edited or typeset it) to the library team so they can check the publisher’s access policies and ensure the paper is made available on the university’s public repository when and where possible. More on this here:

https://www.abdn.ac.uk/library/support/2column-page-773-773.php

When you have a finalized reference for your publication, it is helpful to link this reference to any registrations or data you have already publicly shared. If you have shared the manuscript on a public archive, make sure you link to or reference the published article so it is clear the two versions are reporting on the same set of data.

Some research can draw the eye of the general public or have practical applications. Clear and accurate communication about our research findings with the general public is an important aspect of open science. Get in touch with the communications team if you have questions or ideas about how to reach non-academic audiences with your discoveries. https://www.abdn.ac.uk/news/communications/index.php

It is also important to respond promptly and cooperatively if other researchers reach out to request clarification on your publicly shared materials, or to ask for access if you have not already made them public.