Develop proposal for special issue

From Geoscience Paper of the Future
Revision as of 05:34, 8 April 2015 by Xuan (Talk | contribs) ([Yu et al., 2015])



Background: Why a Special Issue on Geoscience Papers of the Future?

Include here our discussion for the vision

Background should be 1-2 pages.

Motivated by the need to fully document research and make it accessible and reproducible.

Motivation: The EarthCube Initiative and the GeoSoft Project

Include here background about GeoSoft from the web site

OSTP memo. EarthCube reports. Other reports that talk about the need for new approaches to editing.

It's possible that very small or very large contributions are not well captured by current publishing paradigms.

For example, nanopublications are a possible way to reflect advances in a research process that may not merit a full publication but are nonetheless useful to share with the community. A challenge here is the stigma attached to publishing units of work that are very small.

Alternatively, a very large piece of research or work with many parts may be better suited to a GPF style publication.


Perhaps the concept of a 'paper' is better reflected in the concept of a 'wrapper': a collection of materials and resources. The purpose is to ensure that publications are representative of the work, effort, and results achieved in the research process.

What is a GPF?

Include here our discussion of what is a GPF

The challenges of creating GPFs

The articles in this issue reflect the current best practice for generating a Geoscience Paper of the Future.

Figure discussions: Do we want to generate exactly the same figure automatically? Figures in the paper may be cleaned-up versions of images generated by software. To the extent possible, authors have included clear delineations of provenance; the goal is to ensure that readers can regenerate the figures using the documented workflows, data, and codes. An important note (Allen, Sandra) is that figures are frequently generated by code or scripts, yet the actual figure is finalized by the user. Mimi's point: is it really worth belaboring how the prettified version of the figure is made? If it is, both of the visualization packages I've used (Matlab and SigmaPlot) keep actual code in the background that specifies the prettification, and this code can be found, copied out, and rerun to generate the exact same figure with all of the prettification in the same place (SigmaPlot uses Visual Basic in its macros, I think). If explicit code is the important point, this should be doable, but it may not be strictly necessary to specify exactly where all the prettifications are to get the gist across.
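
The point about fully scripted figures can be illustrated with a minimal sketch (Python/matplotlib is assumed here purely for illustration; Matlab or SigmaPlot macros would serve equally, and all file names are hypothetical). Every cosmetic choice lives in code, and a small provenance record is written next to the output, so a reader can regenerate the figure exactly:

```python
# Minimal sketch of a fully scripted, regenerable figure.
# Illustrative only: file names and styling choices are hypothetical.
import json
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt

def make_figure(x, y, out_png="figure1.png", out_meta="figure1_provenance.json"):
    fig, ax = plt.subplots(figsize=(4, 3))
    ax.plot(x, y, color="steelblue", linewidth=1.5, marker="o")
    ax.set_xlabel("Time (days)")        # every "prettification" is code,
    ax.set_ylabel("Discharge (m^3/s)")  # not a manual edit in a GUI
    ax.set_title("Example scripted figure")
    fig.tight_layout()
    fig.savefig(out_png, dpi=150)
    # Record minimal provenance alongside the figure itself.
    with open(out_meta, "w") as f:
        json.dump({"inputs": {"x": list(x), "y": list(y)},
                   "output": out_png}, f, indent=2)
    return out_png, out_meta

if __name__ == "__main__":
    make_figure([0, 1, 2, 3], [1.0, 2.5, 2.0, 3.5])
```

Rerunning the archived script on the archived inputs reproduces the figure bit-for-bit, which is the delineation of provenance the paragraph above asks for.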

How much of one's experimental history should be included? (Ibrahim). The experimental process often leads nowhere. Should we document all the failed experiments? Should there be one DOI for the results of the successful experiment and another for the failed trials?


Documenting: Timing and intermediate processes. When should we document, and what are the bounds on what we document? For example, should we document and include data and workflows for 'failed' experiments? Should we assign datasets DOIs before we know the results from using them? The group thinks good practice may include documenting and sharing data once there is a clear understanding of which outcomes are worth reporting. For example, successful experiments should have clear, clean data documented and shared, whereas one strategy for 'failed' experiments could be to bundle the intermediate datasets under one DOI together with a more general discussion of the process and methods.

Related work

Include here the related work we have discussed

Papers to be included

Papers have been broadly categorized according to their main "Challenges" - including Reproducibility (i.e., documenting and reproducing previously published results), Dark Code (i.e., describing and sharing code integral to the presented results), Sharing Big Data (i.e., making available large datasets), and Transferability (i.e., updating a previously used method to a new version of software, etc.).

For each submission, we describe:

  • Authors and affiliations
  • Keywords of research area
  • Tentative title
  • Short abstract
  • Challenge
  • Relationship to other publications (is the article based on a previously published article? is it new content? IF PREVIOUSLY PUBLISHED, PLS PROVIDE A POINTER TO THE PUBLISHED ARTICLE AND SPECIFY WHAT PERCENTAGE OF THE WORK PRESENTED WILL BE NEW)
  • Pointer to the wiki page that documents the article
  • Expected submission date

[David 2015]

  • Authors and affiliations: Cedric David
  • Keywords of research area: Hydrology, Rivers, Modeling, Testing, Reproducibility.
  • Tentative title: Going beyond triple-checking, allowing for peace of mind in community model development.
  • Short abstract: The development of computer models in the general field of geoscience is often carried out incrementally over many years. Endeavors that generally start on a single researcher's own machine evolve over time into software that is often much larger than initially anticipated. Looking back at years of building on their computer code, sometimes without much training in computer science, geoscience software developers can easily experience an overwhelming sense of incompetence when contemplating ways to broaden community usage of their software. How does one allow others to use one's code? How can one foster the survival of one's tool? How could one possibly ensure the scientific integrity of ongoing developments, including those made by others? Common steps faced by geoscience developers include selecting a license, learning how to track and document past and ongoing changes, choosing a software repository, and allowing for community development. This paper provides a brief summary of experience with the first three of these steps, focusing on the almost decade-long development of a river routing model. The core of this study, however, focuses on reproducing previously published experiments. This step is highly repetitive and can therefore benefit greatly from automation. Additionally, enabling automated software testing can arguably be considered the final step of sustainable software sharing, by allowing the main software developer to let go of a mental block concerning scientific integrity. Creating tools that automatically compare the results of an updated version of a software package with those of previous studies can not only save the main developer's own time, it can also empower other researchers in their ability to check and demonstrate that their potential additions have retained scientific integrity.
  • Challenge: Reproducibility; Sharing Big Data. Ensure that updates to an existing model are able to reproduce a series of simulations published previously.
  • Relationship to other publications: This research is related to past and ongoing development of the Routing Application for Parallel computatIon of Discharge (RAPID). The primary focus of this paper is to allow automated reproducibility of at least the first RAPID publication. The scientific subject of this GPF differs from the article(s) to be reproduced as its focus is on development of automatic testing methods. In that regard, the paper is expected to be 95% new.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:
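
The automated comparison this entry describes can be sketched generically (an illustrative sketch only, not RAPID's actual test suite; `toy_model` and all other names here are hypothetical): rerun the model on archived inputs and check its output against archived, previously published results within a tolerance.

```python
# Generic sketch of an automated reproducibility (regression) test:
# rerun a model and compare its output to archived published results.
# Illustrative only; `toy_model` is a hypothetical stand-in.
import math

def close_enough(new, reference, rel_tol=1e-6):
    """True if every value matches its reference within a relative tolerance."""
    return all(math.isclose(a, b, rel_tol=rel_tol) for a, b in zip(new, reference))

def regression_test(model, inputs, reference_output, rel_tol=1e-6):
    """Run `model` on archived `inputs` and check against `reference_output`."""
    new_output = model(inputs)
    if len(new_output) != len(reference_output):
        return False
    return close_enough(new_output, reference_output, rel_tol)

# Hypothetical stand-in for a river routing model: a running total of inflows.
def toy_model(inflows):
    total, out = 0.0, []
    for q in inflows:
        total += q
        out.append(total)
    return out

archived = [1.0, 3.0, 6.0]  # "published" results the new version must reproduce
assert regression_test(toy_model, [1.0, 2.0, 3.0], archived)
```

Run after every code change, a test of this shape lets contributors demonstrate that their additions have retained scientific integrity, which is the point the abstract makes.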

[Demir 2015]

  • Authors and affiliations: Ibrahim Demir
  • Keywords of research area: hydrological network, optimization, network representation, database query
  • Tentative title: Analysis and Optimization of Hydrological Network Database Representation Methods for Fast Access and Query in Web-based System
  • Short abstract: Web-based systems allow users to delineate watersheds in interactive map environments using server-side processing. With the increasing resolution of hydrological networks, optimized methods for storing network representations in databases, and efficient queries and actions on the river network structure, become critical. This paper presents a detailed analysis of widely used methods for representing hydrological networks in relational databases, benchmarking common queries and modifications of the network structure using these methods. The analysis has been applied to the hydrological network of Iowa, utilizing a 90 m DEM and 600,000 network nodes. The results indicate that the representation methods provide massive improvements in query times and in storage of the network structure in the database. The suggested method allows watershed delineation tools to run client-side with desktop-like performance.
  • Challenge: Reproducibility, Transferability; Some of the internal steps to prepare data might require long computation time and different software environments.
  • Relationship to other publications: The article is based on a new study
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Fulweiler 2015]

  • Authors and affiliations: Wally Fulweiler
  • Keywords of research area:
  • Tentative title:
  • Short abstract:
  • Challenge:
  • Relationship to other publications: (is the article based on a previously published article? is it new content?)
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Loh and Karlstrom 2015]

  • Authors and affiliations: Lay Kuan Loh and Leif Karlstrom
  • Keywords of research area: Spatial clustering, Eigenvector selection, Entropy Ranking, Cascades Volcanic Region, Afar Depression, Tharsis province
  • Tentative title: Characterization of volcanic vent distributions using spectral clustering with eigenvector selection and entropy ranking
  • Short abstract: Volcanic vents on the surface of Earth and other planets often appear in groups that exhibit spatial patterning. Such vent distributions reflect a complex interplay between time-evolving mechanical controls on the pathways of magma ascent, background tectonic stresses, and unsteady supply of rising magma. With the ultimate aim of connecting surface vent distributions with the dynamics of magma ascent, we have developed a clustering method to quantify spatial patterns in vents. Clustering is typically used in exploratory data analysis to identify groups with similar behavior by partitioning a dataset into clusters that share similar attributes. Traditional clustering algorithms that work well on simple point-cloud synthetic datasets generally do not scale well to the real-world data we are interested in, where boundaries between clusters are poor and cluster assignments are ambiguous. We instead use a spectral clustering algorithm with eigenvector selection based on entropy ranking, following Zhao et al. (2010), that outperforms traditional spectral clustering algorithms in choosing the right number of clusters for point data. We benchmark this algorithm on synthetic vent data with increasingly complex spatial distributions, to test its ability to accurately cluster vent data with variable spatial density, skewness, number of clusters, and proximity of clusters. We then apply our algorithm to several real-world datasets from the Cascades, the Afar Depression, and Mars.
  • Challenge: Reproducibility (i.e., quantifying clustering); We plan to study how varying the statistical distribution, density, skewness, background noise, number of clusters, proximity of clusters, and combinations of any of these factors affects the performance of our algorithm. We test it against synthetic and real-world datasets.
  • Relationship to other publications: New content, but one of the databases we are studying in the paper (Cascades Volcanic Range) is based on a different paper we are preparing and planning to submit earlier.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: June 2015

[Lee 2015]

  • Authors and affiliations: Kyo Lee, Maziyar Boustani and Chris Mattmann, Jet Propulsion Laboratory
  • Keywords of research area: North American regional climate, regional climate model evaluation system, Open Climate Workbench
  • Tentative title: Evaluation of simulated temperature, precipitation, cloud fraction and insolation over the conterminous United States using Regional Climate Model Evaluation System
  • Short abstract: This study describes the detailed process of evaluating model fidelity in simulating four key climate variables (surface air temperature, precipitation, cloud fraction, and insolation) and their covariability over the conterminous United States. The Regional Climate Model Evaluation System (RCMES), a suite of public databases and open-source software packages, provides both observational datasets and data processors useful for evaluating any climate model. In this paper, we provide a clear and easy-to-follow RCMES workflow to replicate published papers evaluating North American Regional Climate Change Assessment Program (NARCCAP) regional climate model (RCM) hindcast simulations against observations from a variety of sources.
  • Challenge: Big Data Sharing, Dark Code; Sharing big data, better documenting source code, encouraging the climate science community to use RCMES
  • Relationship to other publications: Kim et al. 2013, Lee et al. 2014
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: End of June 2015

[Miller 2015]

  • Authors and affiliations: Kim Miller
  • Keywords of research area:
  • Tentative title:
  • Short abstract:
  • Challenge:
  • Relationship to other publications: (is the article based on a previously published article? is it new content?)
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Mills 2015]

  • Authors and affiliations: Heath Mills, University of Houston Clear Lake; Brandi Kiel Reese, Texas A&M Corpus Christi
  • Keywords of research area:
  • Tentative title: Iron and Sulfur Cycling Biogeography Using Advanced Geochemical and Molecular Analyses
  • Short abstract: My paper will develop and document a new pipeline to analyze a combined and robust genetic and geochemical data set. New, reproducible methods will be highlighted in this manuscript to help others better analyze similar data sets. There is a general lack of guidance within my field for such challenges. This manuscript will be unique and helpful from an analysis standpoint as well as for the science being presented.
  • Challenge: Reproducibility; Dark Code
  • Relationship to other publications: Original Manuscript
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Oh 2015]

  • Authors and affiliations: Ji-Hyun Oh, Jet Propulsion Laboratory/University of Southern California
  • Keywords of research area: Tropical Meteorology, Madden-Julian Oscillation, Momentum budget analysis
  • Tentative title: Tools for computing momentum budget for the westerly wind event associated with the Madden-Julian Oscillation
  • Short abstract: As one of the most pronounced modes of tropical intraseasonal variability, the Madden-Julian Oscillation (MJO) prominently connects global weather and climate and serves as one of the critical sources of predictability for extended-range forecasting. The zonal circulation of the MJO is characterized by low-level westerlies (easterlies) in and to the west (east) of the convective center. The direction of the zonal winds in the upper troposphere is opposite to that in the lower troposphere. In addition to the convective signal as an identifier of MJO initiation, certain characteristics of the zonal circulation have been used as a standard metric for monitoring the state of the MJO and investigating features of the MJO and its impact on other atmospheric phenomena. This paper documents a tool for investigating the generation of low-level westerly winds during the MJO life cycle. The tool is used for a momentum budget analysis to understand the respective contributions of the various processes involved in the wind evolution associated with the MJO, using European Centre for Medium-Range Weather Forecasts operational analyses from the Dynamics of the Madden–Julian Oscillation field campaign.
  • Challenge: Reproducibility, Dark Code; This paper will cover how to reproduce two key figures from a paper I recently submitted to the Journal of the Atmospheric Sciences. This will include detailed procedures for generating the figures, such as how and where to download the data, how to transform the data into the input format for my codes, and so on.
  • Relationship to other publications: This article is related to part of the paper submitted to the Journal of the Atmospheric Sciences.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Pierce 2015]

  • Authors and affiliations: Suzanne Pierce, John Gentle, and Daniel Noll (Texas Advanced Computing Center and Jackson School of Geosciences, The University of Texas at Austin; US Department of Energy)
  • Keywords of research area: Decision Support Systems, Hydrogeology, Participatory Modeling, Data Fusion
  • Tentative title: MCSDSS: An accessible platform and application to enable data fusion and interactive visualization for the Geosciences
  • Short abstract: The MCSDSS application is an advanced example of interactive design that enables data fusion for science visualization, decision support applications, and education. What sets the tool apart is its firm underpinning in data, its innovative interface design, and its reusable platform. A key advance is the creation of a framework that can feed new data, videos, maps, images, or other formats of information into the application with relative ease.
  • Challenge: Reproducibility, Dark Code; Fully document a new software application and framework using example case study data and tutorials; Creation of an interface that enables non-programmers to build out interactive visualizations for their data
  • Relationship to other publications: This article is new content. The proof-of-concept idea was developed with DOE funding for a student competition and resulted in an initial implementation that was reported in the DOE competition report and in a master's thesis by co-author Daniel Noll.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: mid- to late June 2015

[Pope 2015]

  • Authors and affiliations: Allen Pope, National Snow and Ice Data Center, University of Colorado, Boulder
  • Keywords of research area: Glaciology, Remote Sensing, Landsat 8, Polar Science
  • Tentative title: Data and Code for Estimating and Evaluating Supraglacial Lake Depth With Landsat 8 and other Multispectral Sensors
  • Short abstract: Supraglacial lakes play a significant role in glacial hydrological systems – for example, transporting water to the glacier bed in Greenland or leading to ice shelf fracture and disintegration in Antarctica. To investigate these important processes, multispectral remote sensing provides multiple methods for estimating supraglacial lake depth – either through single-band or band-ratio methods, both empirical and physically-based. Landsat 8 is the newest satellite in the Landsat series. With new bands, higher dynamic range, and higher radiometric resolution, the Operational Land Imager (OLI) aboard Landsat 8 has a lot of potential.

This paper will document the data and code used in processing in situ reflectance spectra and depth measurements to investigate the ability of Landsat 8 to estimate lake depths using multiple methods, as well as to quantify improvements over Landsat 7’s ETM+. A workflow, data, and code are provided to detail promising methods as applied to Landsat 8 OLI imagery of case study areas in Greenland, allowing calculation of regional volume estimates using 2013 and 2014 summer-season imagery. Altimetry from WorldView DEMs is used to validate the lake depth estimates. The optimal method for supraglacial lake depth estimation with Landsat 8 is shown to be an average of the single-band depths from the red and panchromatic bands. With this best method, a preliminary investigation of the seasonal behavior and elevation distribution of lakes is also discussed and documented.

  • Challenge: Reproducibility, Dark Code
  • Relationship to other publications: Documenting and explaining the data and code behind the analysis and results presented in another paper.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: Late June 2015

[Read and Winslow 2015]

  • Authors and affiliations: Jordan Read and Luke Winslow
  • Keywords of research area:
  • Tentative title:
  • Short abstract:
  • Challenge:
  • Relationship to other publications: (is the article based on a previously published article? is it new content?)
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Tzeng 2015]

  • Authors and affiliations: Mimi Tzeng, Brian Dzwonkowski (DISL); Kyeong Park (TAMU Galveston)
  • Keywords of research area: physical oceanography, remote sensing
  • Tentative title: Fisheries Oceanography of Coastal Alabama (FOCAL): A Subset of a Time-Series of Hydrographic and Current Data from a Permanent Moored Station Outside Mobile Bay (27 Jan to 18 May 2011)
  • Short abstract: The Fisheries Oceanography in Coastal Alabama (FOCAL) program began in 2006 as a way for scientists at the Dauphin Island Sea Lab (DISL) to study the natural variability of Alabama's nearshore environment as it relates to fisheries production. FOCAL provided a long-term baseline data set that included time-series hydrographic data from a permanent offshore mooring (ADCP, vertical thermistor array, and CTDs at surface and bottom) and shipboard surveys (vertical CTD profiles and water sampling), as well as monthly ichthyoplankton and zooplankton (depth-discrete) sample collections at FOCAL sites. The subset of data presented here is from the mooring and includes a vertical array of thermistors, CTDs at surface and bottom, an ADCP at the bottom, and vertical CTD profiles collected at the mooring during maintenance surveys. The mooring is located at 30 05.410'N 88 12.694'W, 25 km southwest of the entrance to Mobile Bay. Temperature, salinity, density, depth, and current velocity data were collected at 20-minute intervals from 2006 to 2012. Other parameters, such as dissolved oxygen, are available for portions of the time series, depending on which instruments were deployed at the time.
  • Challenge: Dark Code, Reproducibility; My paper will be about the processing of data in a larger dataset from which peer-reviewed papers have been written. The processing I did was not specific to any particular paper. I can point to an example paper that used some of the data I processed from this dataset; however, all of the figures in that paper are composites that also include other data from elsewhere that I had nothing to do with (and it would not be feasible to obtain the other data within our timeframe).
  • Relationship to other publications: A recent paper that used the part of the FOCAL data I'm documenting as the sample from the larger dataset: Dzwonkowski, Brian, Kyeong Park, Jungwoo Lee, Bret M. Webb, and Arnoldo Valle-Levinson. 2014. "Spatial variability of flow over a river-influenced inner shelf in coastal Alabama during spring." Continental Shelf Research 74:25-34.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date:

[Villamizar 2015]

  • Authors and affiliations: Sandra Villamizar, University of California, Merced
  • Keywords of research area: river ecohydrology
  • Tentative title: Producing long-term series of whole-stream metabolism using readily available data.
  • Short abstract: Continuous water quality and river discharge data that are readily available through government websites may be used to produce valuable information about key processes within a river ecosystem. In this paper I describe in detail the steps for the acquisition and processing of river flow, dissolved oxygen, temperature, and specific conductance data that, combined with atmospheric data and physical properties of the river reach of interest, allow for the production of a long-term series of whole-stream metabolism. This information is key to understanding the structure and function of an ecosystem such as the San Joaquin River in the Central Valley of California, which has been increasingly degraded during the last 60 years by intensive human intervention but has been undergoing a restoration effort since 2010. The key advantage of this tool is that it uses readily available information to produce knowledge about a river ecosystem. The set of scripts, written in R, can be used immediately for any other river for which the key parameters (river flow, dissolved oxygen, temperature, and specific conductance) are available. The scripts can also be modified by users to fit their particular site conditions.
  • Challenge: Reproducibility; Dark Code; Document new software/applications. This set of scripts was written out of the necessity of generating daily estimates of metabolic rates over long periods of time and at various sites within the San Joaquin River.
  • Relationship to other publications: This will be a new publication
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: To be defined

[Yu et al., 2015]

  • Authors and affiliations: Xuan Yu, Department of Geological Sciences, University of Delaware; Gopal Bhatt, Department of Civil & Environmental Engineering, Pennsylvania State University; Alain N. Rousseau, Institut National de la Recherche Scientifique (Centre Eau, Terre et Environnement), Université du Québec, 490 rue de la Couronne, Québec City, QC, Canada, G1K 9A9; Alvaro Pardo Alvarez, Institut National de la Recherche Scientifique (Centre Eau, Terre et Environnement), Université du Québec, 490 rue de la Couronne, Québec City, QC, Canada, G1K 9A9.

  • Keywords of research area: coupled processes, integrated hydrologic modeling, PIHM, surface flow, subsurface flow, open science
  • Tentative title: Learning integrated modeling of coupled surface and subsurface hydrology from scratch
  • Short abstract: Integrated modeling of coupled surface and subsurface flow has been of great interest for understanding not only the intimate interconnectedness of hydrological processes, but also land-surface energy balance, biogeochemical and ecological processes, and landscape evolution. Although a growing number of complex hydrologic models have been used for resolving environmental processes, testing hypotheses, and making hydrologic predictions for effective watershed management, very limited model provenance resources have been made accessible to the large group of potential model users. Users have to invest a significant amount of time and effort to reproduce and understand the workflow of a hydrologic simulation in a modeling paper. To provide a challenging and stimulating introduction to integrated modeling of coupled surface and subsurface flow, we revisit the development of PIHM (Penn State Integrated Hydrologic Model) by reproducing a numerical benchmarking example and a real watershed application. Specifically, we document PIHM and its modeling workflow to enable a basic understanding of simulating coupled surface and subsurface flow processes. We provide the model and data together to highlight their reciprocal roles. In addition, we incorporate user experience as a third dimension in the modeling workflow to enable clear provenance and deeper communication between model developers and users. The workflow has important implications for smoothing and accelerating open scientific collaboration in geosciences research.
  • Challenge: Reproducibility; Reproduce previously published simulations from an existing model using its latest version. Benchmark the modeling application against a numerical experiment and field data.
  • Relationship to other publications: The article is based on a previously published article.
  • Pointer to the wiki page that documents the article: Page
  • Expected submission date: End of June 2015

Special Issue Editors

  • Co-editor: Chris Duffy and/or Scott Peckham
  • Co-editor: Cedric David
  • Co-editor: possibly Karan Venayagamoorthy

The editors will only accept submissions that follow the special issue review criteria.

The editors will select a set of reviewers to handle the submissions. Reviewers will include computer scientists, library scientists, and geoscientists.

Special Issue Review Criteria

The reviewers will be asked to provide feedback on the papers according to the following criteria. Note that some papers will have good reasons for limiting the information (e.g., the data is from third parties and not openly available); in such cases, the authors should document those reasons.

  • Documentation of the datasets: descriptions of datasets, unique identifiers, repositories.
  • Documentation of software: description of all software used (including pre-processing of data, visualization steps, etc), unique identifiers, repositories.
  • Documentation of the provenance of results: provenance for each figure or result, such as the workflow or the provenance record.

Tentative Timeline

  • Journal committed to special issue: April 15, 2015
  • Submissions due to editors: June 30, 2015
  • Reviews due: Sept 15, 2015
  • Decisions out to authors: Sept 30, 2015
  • Revisions due: October 31, 2015
  • Final versions due: November 15, 2015
  • Issue published: December 31, 2015