By Sheeva Azma
In February 2024, I was lucky to attend the 2024 AAAS Annual Meeting virtually, for free, as a journalist affiliated with the National Association of Science Writers. Since I am a science policy nerd, I decided to attend all the science policy sessions I could and share them in my 2024 AAAS Annual Meeting recap blogs. I am a dedicated AAAS fangirl; you can read all of my writing about the various AAAS seminars and meetings I’ve attended here.
This session, titled “Open Science: From Policy to Practice,” was packed with information on how to implement and track open science and how to incentivize researchers to share their work openly.

Since 2023, all NASA-funded science must be publicly available
The first speaker was Jose Luis Galache of the Open Source Science Initiative (OSSI) at NASA Headquarters in Washington, DC, who kicked off the panel discussion with a talk about how NASA does open science. He started by defining open science and highlighting its inherent challenges: “when you do science with data and software that is easy to find, access, and use, and everyone who wants to can use it and do so securely, but one does not simply do open science…there are many barriers.” Some of the challenges relate to data, software, and computation; others relate to the use of shared storage space in the cloud, and who pays for it.
At NASA, Galache explained, OSSI is built on four pillars: policy; core data and computing services; open science incentives, such as grants and setting up cooperative agreements; and community engagement, such as NASA’s Transform to Open Science or TOPS program, which seeks to train scientists and researchers in open science and, more broadly, “transform agencies, organizations, and communities to an inclusive culture of open science.”
Galache stated that even though researchers don’t always like policy, he came to embrace it after joining NASA. “Policy is a good thing since it tells people what they are expected to do and that’s very important and it can tell them how to do that as well,” he said. NASA policy, specifically a policy called SPD-41a, states that, as of 2023, all NASA-funded researchers must make their data public no later than when they publish their results. The manuscript must also be freely available, since it is funded by NASA and therefore paid for by taxpayers; researchers who publish behind paywalls must make preprints available.
NASA’s TOPS program has a five-module course, “Open Science 101,” which anyone can take on their own time to get a badge and certificate to put on LinkedIn or ORCID.
HHMI is rethinking research culture’s focus on recognition
Next to speak was Anna Hatch, Program Officer for Science Strategy at the Howard Hughes Medical Institute (HHMI). With a background in cell and molecular biology, Hatch became interested in the role information sharing plays in the advancement of research worldwide. Her work seeks to evaluate research culture by examining recognition and reward systems in academia. Her work on research assessment led to HHMI decreasing its reliance on journal impact factors, which seek to numerically rank journals based on “relative importance.”
The way scientists evaluate research matters, Hatch says: how evaluation is done, how it is shared, who gets to do research, and so on. The key questions in research evaluation, she says, are: “Are we measuring what we value? What do we value?”
Impact factors and other numerical metrics of scientific productivity, such as the number of papers published, “feel objective,” stated Hatch, but they provide a “false sense of objectivity,” as “overrelying on them masks important contributions.” Journal names tell you about the journal, not about the researchers publishing in it, and citations accumulate over time, so more senior researchers will have higher citation counts.
Hatch stated that relying too heavily on journal names and citation indicators leads to “unignorable consequences.” Seeking to publish in prestigious journals delays graduate students’ and postdocs’ careers because they have to collect more data, lengthening their training time, which she describes as “demoralizing.” As someone who has been in those shoes, I can also say that it wastes time and money.
HHMI is working to deemphasize journal names. All HHMI research is indexed on PubMed, where each paper has a PubMed ID (PMID), so the effort started with asking scientists to cite PMIDs rather than journal names (or DOIs for preprints, though some DOIs embed journal names, making them a less-than-ideal option). Once they did that, they got positive feedback, so the next step was to remove journal names from posters at science meetings. That was a hit, too, so they then removed journal names from bibliographies. The current citation format, Hatch stated, includes the authors, the title of the journal article, the year it was published, and the DOI (for preprints) or PMID (which HHMI prefers for journal articles); a citation might read something like “Smith J, et al. Title of the article. 2024. PMID: 12345678” (a made-up example).
Moving past journal names as markers of scientific rigor means a lot in the context of open science, according to Hatch. If we value open science and want to encourage it, we must think about the reward and incentive systems by which it operates, as well as the indicators to be used in open science research assessment. Indicators have limitations, especially if they are qualitative; there are some things indicators can measure and others they cannot. Knowing what cannot be measured helps inform the limitations of the assessment and helps researchers look for more useful metrics to capture the missing information.
It’s okay to use imperfect indicators as long as you know where they fall short, says Hatch; this paints a more holistic picture of research performance. Lastly, she states, open data infrastructure for research assessment is key to using indicators in open science; the underlying metadata should also be open and transparent so that assessments are auditable.
Breaking down barriers to open science
Tim Vines of DataSeer spoke next about “hidden walls” in open science, which are “hard to quantify and also…have profound effects on the structure and the way the community goes forward, the way it goes about the business of science.” He contrasted these “hidden walls” with more obvious ones, such as paywalls preventing access to a journal article.
A subtle, hidden wall that affects scientists’ incentives to share their data relates to the way open science datasets are credited: often, they are not credited at all, Vines has learned through his research. He partnered with Cite.AI, which classifies citations, to analyze a list of 2,000 citations. The tool produced citation statements for these articles, but many of the citations never listed the underlying dataset, so the people who shared those datasets received no credit for their reuse. As a result, data reuse looks rare in the literature when it is actually common; researchers simply are not crediting the sources of their datasets. In one case, a paper was missing about half of the dataset citations it should have received for sharing its data. This poses a problem: if data reuse does not look common, there is less impetus to support open science and organizations like Dryad that make it possible.
Changing author behavior is one way to fix this, but natural language processing can also be used to find papers that use open datasets without properly citing the original source. People want to know the impact of their work, so uncredited reuse is a huge barrier to open science; crediting people properly is central to incentivizing and rewarding open science.
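To make the idea concrete, here is a minimal sketch of that kind of text mining, with a simple regular expression standing in for real NLP. The Dryad DOI prefix (10.5061) is real, but the function name, the example text, and the approach itself are purely illustrative; this is not how DataSeer actually works.

```python
import re

# Dryad dataset DOIs use the 10.5061 prefix; this crude pattern stands in
# for the trained NLP models a production system would use.
DATASET_DOI = re.compile(r"10\.5061/dryad\.[a-z0-9]+", re.IGNORECASE)

def uncredited_datasets(full_text: str, references: str) -> set[str]:
    """Return dataset DOIs mentioned in the body but absent from the references."""
    mentioned = {m.lower() for m in DATASET_DOI.findall(full_text)}
    cited = {m.lower() for m in DATASET_DOI.findall(references)}
    return mentioned - cited

# Hypothetical example: the paper reuses a Dryad dataset but never cites it.
body = "We reanalyzed measurements archived at doi:10.5061/dryad.abc12 in Dryad."
refs = "1. Smith J. et al. (2020). Some unrelated article."
print(uncredited_datasets(body, refs))  # prints {'10.5061/dryad.abc12'}
```

A real system would go further, using trained language models to distinguish genuine data reuse from passing mentions, but even this toy version shows how uncredited reuse could be surfaced at scale.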
Facilitating open science collaborations at MIT
The last speaker was Chris Bourg of MIT Libraries, who is also Founding Director of the Center for Research on Equitable and Open Scholarship (CREOS) within the Libraries. Bourg starts by recapping Vines’s point: “If people had more citations to their data, they would be more motivated to share more data next time.” She states that this is a hypothesis that needs to be tested. If we want open science that works, we have to go beyond personal hunches and anecdotes and test hypotheses, but we’re not there yet, Bourg says. We need a more scientific approach to open science, from practices to policies to impacts.
Bourg is tackling this challenge through CREOS, which funds researchers, holds seminars, and more. CREOS works with MIT faculty across many disciplines to fund open science research, provide data management support, help with contract negotiations with publishers, and more. Its open science research priorities examine the barriers that individuals, institutions, and publishers face. “We’d like to be a place where we can start to generate research on that topic,” she says. The economics of data sharing and open science is also an area of interest for CREOS: if a journal is not going to charge for content, for example, its revenue model requires evaluation.
What will a rigorous science of open science do for us? It will develop common methodologies and measures and establish a body of evidence to inform policy. OSTP guidance on open science is coming, and we need to make sure it does what it sets out to do, Bourg says. She also questions the idea that open science leads to equity, since that is a hypothesis that has not been tested. She asks: can we create an open science framework that is really more equitable? We cannot know the answer yet, she says, because for now it rests on anecdote rather than rigorous testing.
Beyond rigorously evaluating open science in order to advance it, being able to do, evaluate, and read open science is a useful skill set for scientists in its own right, Bourg says.
Looking to an open science future
After the panelists spoke, participants stepped up to the microphone to ask questions. One participant suggested that, in the age of open science, perhaps the academic paper is not the fundamental unit of science anymore. This provides a whole new framework to look at scientific communication, she says.
Another participant asked how researchers can rely less on citations (per Hatch’s talk) while also citing people properly (per Vines’s talk). Hatch chimed in that if a dataset doesn’t have a lot of citations but has been used in impactful ways, “that’s still valuable.” She mentions “narrative CVs,” in which you can highlight such indicators to give a holistic view beyond the numbers. Bourg chimed in that “citations are a beautiful thing that show the progression of knowledge,” framing them as a “beautiful” narrative in and of themselves.
According to the panelists, early career researchers can contribute to this effort without hurting their job prospects, but it requires some careful thought and introspection about one’s career and goals. Hatch, who is leading HHMI’s effort to move away from journal names as a metric, says that not relying solely on citation-based indicators is a huge step, but that you have to “work within your comfort zone” on that. Instead of highlighting journal names, you can explain in a couple of sentences why a publication was important, as others have done, she suggests. The next phase of this is making peer reviews publicly available, Hatch says. Bourg suggests that finding a mentor who also supports open science can help early career researchers interested in open science succeed.
Upskill and learn about science policy with Fancy Comma, LLC’s FREE resources
Read all of our AAAS 2024 Annual Meeting recap blogs here! Feel free to also check out Fancy Comma’s FREE resources on science communication and science policy.