


Systematic Map Protocol

Title
What evidence exists on the ecological and physical effects of built structures in shallow, tropical coral reefs? A systematic map protocol

Citation:
Avery Paxton, Todd Swannack, Candice Piercy, Safra Altman, Leanne Poussard, Brandon Puckett, Curt Storlazzi, Shay Viehman. What evidence exists on the ecological and physical effects of built structures in shallow, tropical coral reefs? A systematic map protocol. PROCEED-23-00152. Available from:
https://proceedevidence.info/protocol/view-result?id=152
https://doi.org/10.57808/proceed.2023.20

Corresponding author’s email address
avery.paxton@noaa.gov

Keywords
artificial structures; coastal protection; coastal restoration; eco-engineering; nature-based solutions

Background
Despite the history and increasing consideration of built structures for coral restoration and related applications, such as environmental mitigation and coastal protection, questions remain regarding how built structures should be considered in management and restoration decisions. Central to these questions is that the global evidence base on the use and performance of built structures has not been collated or synthesized, although syntheses exist for particular contexts, such as artificial reefs (Higgins, Metaxas and Scheibling 2022), substrate stabilization (Ceccarelli et al. 2020), and 3D technology for reef structures (Levy et al. 2022). The lack of broadly synthesized evidence presents barriers to implementing management and policy decisions regarding the future use of built structures in coral reef systems. Without synthesized evidence, it is challenging for decision makers to rigorously and reproducibly evaluate whether built structures may be appropriate tools in particular environmental settings and use-case scenarios. The goal of this study is to collate evidence on the ecological and physical performance of built structure interventions in shallow, tropical coral reef settings. This synthesis will help inform practice for built structure design and implementation, including as nature-based solutions that can help address societal and ecological challenges. Because built structures have been used for multiple applications related to tropical coral reefs, such as restoration, coastal protection, and environmental mitigation, we will include evidence from these diverse bodies of literature. Drawing on the most comprehensive body of relevant literature will help ensure that findings from the synthesis can guide management decisions regarding the design, siting, and implementation of gray-green infrastructure in coral reef settings.

Theory of change or causal model
N/A

Stakeholder engagement
This project was jointly conceptualized by scientists from the National Oceanic and Atmospheric Administration (NOAA) National Centers for Coastal Ocean Science (NCCOS), the U.S. Army Corps of Engineers (USACE) Engineering with Nature (EWN) Program, and the U.S. Geological Survey (USGS) Coastal and Marine Hazards and Resources Program (CMHRP) to synthesize how built structures have been used in a variety of contexts, such as those related to coral restoration, coastal protection, and environmental mitigation. The motivation for the synthesis was to catalog uses of and ecological and physical performance outcomes associated with built structures in shallow, tropical coral reef settings to help inform hybrid or gray-green reef structure design, siting, implementation, and potentially policy decisions. The core team of scientists from NOAA, USACE, and USGS scoped the systematic map and developed the search strategy based on stakeholder needs.

Objectives and review question
The objective of this systematic map is to document the global evidence base on the performance (ecological and physical) of built structures in shallow, tropical coral reef settings. The systematic map also aims to summarize how evidence differs by built structure qualities, such as the type and material of intervention, as well as the goal and seascape setting.

Question: What is the distribution and abundance of evidence on the ecological and physical performance of built structures in shallow, tropical coral reef systems?

Definitions of the question components
Population: Coral reefs located in shallow, tropical coastal environments (≤ 30 m depth, 35° N to 35° S latitude).
Intervention: Built structures of human-made, hybrid, or natural origin established in coral systems.
Comparator: No comparator is required beyond the presence of a built structure intervention. Studies that include a comparator (presence vs. absence of a built structure intervention, before vs. after a built structure intervention, different types of built structure interventions, etc.) will also be included.
Outcome: Ecological (coral-related) or physical (e.g., waves, currents, flooding) performance outcomes associated with the built structure intervention.
Study type: Experimental, quasi-experimental, observational, or modeling studies with quantitative data on ecological or physical outcomes associated with the intervention. Studies can be conducted in field or laboratory settings.

Search strategy
A search for primary literature, including peer-reviewed articles and gray literature, will be performed using multiple indexing platforms, bibliographic databases, organizational websites, and other search platforms. There are no temporal constraints on the search. The geographic scope of the search is global because coral reef degradation and loss are a global issue (Eddy et al. 2021). Searches will be performed in English, and articles without a full text published in English will be documented and excluded. We restricted the search to English due to resource constraints and recognize that this introduces bias to the systematic map. We developed six search substrings that will be combined into one string as follows:

Population: (coral reef substring)
AND Intervention: (built structure substring AND context for built structure intervention substring)
AND Outcome: (ecological outcome substring OR physical outcome substring)

Searches for relevant primary literature will be performed in indexing platforms, bibliographic databases, open discovery citation indexes, and a web-based search engine. See Additional File 2 for additional details, including the full search strings.
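As a purely illustrative sketch of how the named substrings combine into one Boolean string, the R snippet below uses placeholder terms; the terms shown are assumptions for demonstration only, and the actual substrings are provided in Additional File 2.

```r
# Illustrative only: placeholder substrings, not the actual search terms
# (the full substrings are provided in Additional File 2).
population <- '("coral reef*" OR "coral communit*")'
structure  <- '("artificial reef*" OR "built structure*" OR breakwater*)'
context    <- '(restor* OR mitigat* OR "coastal protection")'
ecological <- '(recruit* OR growth OR surviv* OR "coral cover")'
physical   <- '(wave* OR current* OR erosion OR flood*)'

# Population AND (Intervention) AND (Outcome)
search_string <- paste0(
  population,
  " AND (", structure, " AND ", context, ")",
  " AND (", ecological, " OR ", physical, ")"
)
cat(search_string)
```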

Bibliographic databases
Web of Science
- Indexes: SCI-Expanded (1980-present), SSCI (1980-present), CPCI-S (1990-present), CPCI-SSH (1990-present), ESCI (2018-present)
- Subscription: Duke University
- Document types: Article, Proceedings Paper, Early Access, Data Paper

Scopus
- Indexes: N/A
- Subscription: Duke University

ProQuest
- Indexes: Aquatic Sciences and Fisheries Abstracts; Meteorological and Geoastrophysical Abstracts; Earth, Atmospheric, & Aquatic Sciences Database; Oceanic Abstracts
- Subscription: Duke University
- Source types: Scholarly Journals, Dissertations & Theses, Conference Papers & Proceedings, Reports

LENS
- Indexes: CORE; Crossref; PubMed; Microsoft Academic
- Subscription: N/A
- Document types: Journal Article, Conference Proceeding Article, Conference Proceedings, Dissertation, Report

Dimensions
- Indexes: N/A
- Subscription: N/A
- Publication types: Article, Proceedings

Web-based search engines
The web-based search will be performed using Google Scholar via Harzing’s Publish or Perish software (Harzing 2007). The search string for Google Scholar will be adapted to meet the syntax limitations of the platform, will be applied to titles only, and will be restricted to the first 1,000 results (Haddaway et al. 2015). Searches will also be performed using Inciteful (https://inciteful.xyz/), a literature discovery tool, for up to the first 1,000 similar results (Weishuhn 2022). The Inciteful search will be seeded with benchmarking articles; no search string is required for this platform.

Organisational websites
Twenty organizational websites will also be searched for evidence. The organizations span government agencies, nonprofit organizations, and other entities that report on the use of built structures in coral reef ecosystems. Most organizational websites do not permit Boolean searches, so searches will be performed by hand and the details of how each search was implemented will be documented. Gray literature will be screened in situ, and up to 100 results per organizational website will be screened.

Conservation International https://www.conservation.org/
Coral Reef Alliance https://coral.org/en/
Florida Department of Environmental Protection https://floridadep.gov/
Global Coral Reef Alliance https://www.globalcoral.org/
International Union for Conservation of Nature https://www.iucn.org/
National Oceanic and Atmospheric Administration https://www.noaa.gov/
Sea Grant https://seagrant.noaa.gov/
Reef Base http://reefbase.org/
The Nature Conservancy https://www.nature.org/
United Nations Decade on Restoration https://www.decadeonrestoration.org/
United Nations Development Programme https://www.undp.org/
United Nations Environment Programme https://www.unep.org/
United Nations Environment Programme World Conservation Monitoring Center https://resources.unep-wcmc.org/
U.S. Army Corps of Engineers https://www.usace.army.mil/
U.S. Geological Survey https://www.usgs.gov/
U.S. Fish and Wildlife Service https://www.fws.gov/
Wildlife Conservation Society https://library.wcs.org/
World Bank https://www.worldbank.org/
World Resources Institute https://www.wri.org/
World Wildlife Fund https://www.worldwildlife.org/

Comprehensiveness of the search
The evidence synthesis team identified 21 benchmarking articles to test against the search string. These benchmarking articles were sourced from subject matter experts, including members of the core research team. Search strings were tested in Web of Science; 18 of the 21 articles were indexed in Web of Science (i.e., 3 articles are not part of the Web of Science collection and so would not be found there regardless of the search string used). Search strings were adjusted incrementally until all but two of the 18 indexed articles were retrieved. The two articles that could not be retrieved do not include terms related to the intervention in their titles or abstracts; they had been provided by the synthesis team because they contain embedded case studies that used built structures, but they were deemed undetectable by the search since the intervention is not mentioned in the title or abstract. See Additional File 3.
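The benchmark comparison could be tallied with a short script such as the hypothetical R sketch below; the file names and the 'doi' column are assumptions for illustration and are not part of the protocol.

```r
# Hypothetical benchmark check: which benchmarking articles (by DOI) appear
# in a database export. File names and column names are assumptions.
benchmarks <- read.csv("benchmark_articles.csv")   # 21 benchmark records
wos_export <- read.csv("wos_search_export.csv")    # exported search results

found   <- tolower(benchmarks$doi) %in% tolower(wos_export$doi)
missing <- benchmarks[!found, c("title", "doi")]

cat(sprintf("Benchmark articles retrieved: %d of %d\n", sum(found), nrow(benchmarks)))
print(missing)  # candidates for search string adjustment
```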

Search update
Not applicable.

Screening strategy
Articles returned from the literature searches will be screened against the eligibility criteria in two stages: first by title and abstract and second by full text. The software SWIFT-Active Screener will be used for title and abstract screening because it combines screener feedback with a type of machine learning termed active learning (Howard et al. 2020). Screening will continue until the software’s estimated recall rate reaches 95% (Howard et al. 2020). Screeners will indicate in SWIFT-Active Screener whether articles should be included or excluded based on the eligibility criteria. Articles that pass title and abstract screening will be screened at the full text stage to determine whether they meet the eligibility criteria and should be included in the study. If the full text of an article cannot be located, the article will be excluded. Exclusion rationales will be documented during both screening stages. Screeners will be trained to conduct both screening stages reproducibly. Training will occur in dedicated sessions where select articles are screened as a group before additional articles are screened individually. Inconsistencies in screening decisions will be discussed and used to refine the eligibility criteria. Once screeners are trained, quantitative assessments of inter-reviewer consistency will be conducted by generating Kappa statistics or percentage agreement values for all pairs of reviewers on a set of 100 randomly selected titles and abstracts.
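A minimal sketch of the planned inter-reviewer consistency check, written in R with the 'irr' package's kappa2() function and simulated screening decisions, is shown below; the column names and simulated data are placeholders.

```r
# Minimal sketch: Cohen's kappa and percentage agreement for one pair of
# screeners on 100 randomly selected titles/abstracts (simulated decisions).
library(irr)  # provides kappa2()

set.seed(1)
decisions <- data.frame(
  screener_A = sample(c("include", "exclude"), 100, replace = TRUE),
  screener_B = sample(c("include", "exclude"), 100, replace = TRUE)
)

kappa2(decisions)                                    # Cohen's kappa
mean(decisions$screener_A == decisions$screener_B)   # percentage agreement
```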

Eligibility criteria
Inclusion criteria for the population, intervention, comparator, outcome, and study type follow. See Additional File 5 for the full criteria.
Population: Coral reefs in nearshore, shallow water depths (≤ 30 m) at tropical latitudes (35° N to 35° S) where built structure interventions occur. Coral reefs created or facilitated by a built structure intervention in a location previously devoid of reefs (e.g., an intervention on soft sediment that creates, or is intended to create, reef) can be included.
Intervention: Interventions must use a built structure. Built structures may include those of: 1) artificial or human-made origin, including structures engineered or designed for reef contexts (with or without electricity), structures repurposed from their primary use, and structures created as artwork; 2) hybrid origin, created from a combination of artificial and natural materials, such as cement plus natural rock; 3) natural origin from geologic sources, such as mined rock, limestone, or boulders.
Comparator: No comparator is required because the only requirement is the presence of a built structure. Studies that include a comparator, however, will also be included. Comparators may include presence vs. absence of a built structure intervention, before vs. after a built structure intervention, etc.
Outcome: Ecological and physical performance outcomes of built structure interventions that are measured, observed, or modeled. Ecological outcomes must relate to coral and coral reef metrics, such as recruitment, growth, mortality, condition, rugosity, and cover. Physical outcomes must relate to waves, currents, erosion, flooding, and other coastal processes.
Study type: Experimental, quasi-experimental, modeling (statistical, theoretical), or observational studies with quantitative data.

Consistency checking
Double screening will be conducted for up to 5% of articles at the title and abstract or full text screening stages.

Reporting screening outcomes
Reference management will be conducted using Clarivate’s EndNote (version 20) citation management software (The EndNote Team 2020). RIS files from searches implemented on different platforms (e.g., indexing platforms, bibliographic databases) will be uploaded separately to EndNote, and references will be deduplicated using built-in EndNote functions and open-source tools, such as the R package ‘CiteSource’ (Riley et al. 2022). Reference metadata will be checked and corrected as needed. Cleaned references will be combined into one RIS file and uploaded to the title and abstract screening software, SWIFT-Active Screener (Sciome LLC; Howard et al. 2020), for review. Following title and abstract review, updated RIS files of included and excluded articles will be exported from the screening software. The RIS file corresponding to articles that passed title and abstract screening will then be imported into EndNote for full text screening. Screeners will use EndNote to review references during full text screening and will track reference inclusion and conduct metadata coding using Google spreadsheets. RIS records of included and excluded articles will be kept for ROSES reporting. See the ROSES form in Additional File 1.
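For illustration only, a simplified title-based deduplication step might look like the R sketch below; this is a stand-in for the EndNote and CiteSource deduplication described above, and the file and column names are assumptions.

```r
# Simplified, illustrative deduplication on normalized titles; file and
# column names are hypothetical.
refs <- rbind(
  read.csv("wos_refs.csv"),
  read.csv("scopus_refs.csv"),
  read.csv("proquest_refs.csv")
)

# Normalize titles (lowercase, strip punctuation and whitespace) before matching
title_key    <- gsub("[^a-z0-9]", "", tolower(refs$title))
deduplicated <- refs[!duplicated(title_key), ]

nrow(refs) - nrow(deduplicated)  # number of duplicate records removed
```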

Study validity assessment
Study validity will not be systematically assessed because this is a systematic map which aims to collate and summarize the distribution and abundance of evidence. During data coding, attributes will be extracted that can be used for follow-up assessments of study validity for subsets of the evidence base.


Data coding strategy
Metadata attributes from studies that meet the eligibility criteria will be entered into a data coding spreadsheet. The attributes will include bibliographic information as well as attributes related to the population, intervention, study type, comparator, and outcome. For example, intervention attributes will include the type of built structure intervention, the structure material, the proprietary name (if applicable), the policy-relevant term, and a description of the coral restoration context. Details on each metadata attribute are provided in a code book adapted from one used in Paxton et al. (2023). The code book provides a description of each attribute, instructions for data entry, and the levels of categorical attributes that screeners can select from dropdown menus. We do not plan to contact authors to request missing information. Rather, if the required information is not stated in the article, it will be coded as “unknown.” If an attribute is not applicable to an article, it will be coded as “not applicable.”
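As an illustration of these coding conventions, the hypothetical R sketch below builds a single coding row using the “unknown” and “not applicable” conventions; the attribute names and levels are examples only, and the full attribute set is defined in the code book.

```r
# Hypothetical example of one coded record; attribute names and levels are
# illustrative, not the code book's actual fields.
coding_row <- data.frame(
  article_id          = "example_0001",
  intervention_type   = "engineered module",  # selected from a dropdown level
  structure_material  = "unknown",            # not stated in the article
  proprietary_name    = "not applicable",     # attribute does not apply
  outcome_category    = "ecological",
  outcome_subcategory = "coral condition",
  stringsAsFactors    = FALSE
)
str(coding_row)
```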

Meta-data to be coded
Metadata attributes planned for extraction during data coding will be taken from articles that pass both title and abstract screening and full text screening. Attributes are categorized to encompass bibliographic information, population information, intervention information, and so on. Outcome attributes, such as the outcome category and subcategory and the outcome description, will be repeated for each outcome (e.g., coral condition, waves). Additional details are provided in Additional File 4.

Consistency checking
Screeners will be trained to code metadata reproducibly during a training session. The training session will focus on collectively coding data for several articles. Each screener will then be assigned a subset of articles to code independently. Coding results will be compared qualitatively, and the group will discuss inconsistencies and alter attributes and instructions if necessary. Double data extraction, in which data from a study are extracted by multiple screeners, will not be conducted because of the high number of articles anticipated to require data coding. Instead, we will conduct spot checks for a percentage of articles; the percentage of articles spot checked will be reported in the systematic map.

Type of mapping
Metadata from studies that meet the eligibility criteria at both the title and abstract and full text screening stages will be analyzed to identify patterns in the distribution and abundance of evidence on the use of built structures in coral restoration and related applications. Analyses will be conducted in R (R Development Core Team 2022) to answer the primary and secondary research questions, characterize the evidence base, and identify evidence clusters and evidence gaps. Specifically, the extent of evidence on different types of built structure interventions will be characterized by typology, material, proprietary name, and policy-relevant term. Similarities and differences in the evidence base according to the context for which the built structure intervention was intended, such as coral restoration, environmental mitigation, or coastal protection, will be identified. The abundance and distribution of evidence across the ecological and physical outcomes for which built structures have been evaluated, as well as across study settings (geographic region, spatial scale, and seascape environment), will be cataloged. When feasible, the directionality of evidence (e.g., positive, negative, neutral) will be documented. Evidence clusters and gaps will be identified with heat maps displaying the number of studies for cross-tabulated attributes.
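An illustrative sketch of one such heat map, built in R with ggplot2 from simulated coded attributes, is shown below; the attribute levels and counts are placeholders, not results.

```r
# Illustrative evidence heat map cross-tabulating two coded attributes
# (simulated data; attribute levels are placeholders).
library(ggplot2)

set.seed(1)
coded <- data.frame(
  intervention_type = sample(c("engineered module", "repurposed structure", "natural rock"),
                             200, replace = TRUE),
  outcome_category  = sample(c("coral condition", "recruitment", "waves", "erosion"),
                             200, replace = TRUE)
)

counts <- as.data.frame(table(coded$intervention_type, coded$outcome_category))
names(counts) <- c("intervention_type", "outcome_category", "n_studies")

ggplot(counts, aes(x = outcome_category, y = intervention_type, fill = n_studies)) +
  geom_tile(color = "white") +
  geom_text(aes(label = n_studies)) +
  labs(x = "Outcome", y = "Built structure type", fill = "Studies")
```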

Narrative synthesis methods
Findings will be compiled into an evidence map for peer-reviewed publication that will include a narrative summary of the evidence base. This state-of-the-science review will be complemented by visual depictions of the evidence base, including heat maps, bar plots, and geographic distribution maps. Tabular summaries of findings may also be included.

Knowledge gap identification strategy
The systematic map will emphasize the discovery of evidence clusters and gaps, and suggest potential avenues for future research. Map findings may be applied to help improve practice and help inform policy and management decisions regarding the potential use of built structures in tropical, shallow coral reefs. Map findings will also inform systematic reviews on the quantitative effectiveness of built structures. All data on included and excluded literature and associated metadata will be made publicly available.

Demonstrating procedural independence
Screeners cannot screen articles for which they were an author or coauthor.

Competing interests
Not applicable.

Funding information
This study was supported by the NOAA National Centers for Coastal Ocean Science, the USACE Engineering With Nature® Program, and the USGS Coastal and Marine Hazards and Resources Program.

Authors’ contributions
SV and TS acquired funding for the synthesis. All coauthors conceptualized the project scope. AP developed the search strings with feedback from coauthors. AP developed the protocol, including the search strategy, article screening and eligibility criteria, data extraction and coding strategy, and the study mapping and presentation vision. AP and SV drafted the background section of the protocol. AP drafted all other sections of the protocol. All authors helped refine the systematic map protocol scope, methods, and manuscript. All authors read, reviewed, and approved the final manuscript.

Acknowledgements
We thank the NOAA National Centers for Coastal Ocean Science and the U.S. Army Corps of Engineers Engineering with Nature Program for supporting the protocol. We thank Trevor Riley from the NOAA Central Library for reviewing the search string and syntax. We thank T. Barnes, C. Steenrod, and K. Yates for thoughtful reviews of the manuscript. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the opinions or policies of NOAA or USACE. The mention of trade names or commercial products does not constitute U.S. Government endorsement or recommendation for use.

References
Eddy, T. D., V. W. Y. Lam, G. Reygondeau, A. M. Cisneros-Montemayor, K. Greer, M. L. D. Palomares, J. F. Bruno, Y. Ota, and W. W. L. Cheung. 2021. Global decline in capacity of coral reefs to provide ecosystem services. One Earth 4:1278-1285.
Haddaway, N. R., A. M. Collins, D. Coughlin, and S. Kirk. 2015. The role of Google Scholar in evidence reviews and its applicability to grey literature searching. PLoS One 10:e0138237.
Harzing, A. W. 2007. Publish or Perish. https://harzing.com/resources/publish-or-perish.
Howard, B. E., J. Phillips, A. Tandon, A. Maharana, R. Elmore, D. Mav, A. Sedykh, K. Thayer, B. A. Merrick, V. Walker, A. Rooney, and R. R. Shah. 2020. SWIFT-Active Screener: Accelerated document screening through active learning and integrated recall estimation. Environment International 138:105623.
Paxton, A. B., T. N. Riley, C. L. Steenrod, C. S. Smith, Y. S. Zhang, R. K. Gittman, B. R. Silliman, C. A. Buckel, T. S. Viehman, B. J. Puckett, and J. Davis. 2023. What evidence exists on the performance of nature-based solutions interventions for coastal protection in biogenic, shallow ecosystems? A systematic map protocol. Environmental Evidence 12:1-25.
R Development Core Team. 2022. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
Riley, T., K. Hair, L. Wallrich, M. Grainger, S. Young, C. Pritchard, and N. Haddaway. 2022. CiteSource: Analyze the utility of information sources and retrieval methodologies for evidence synthesis. R package.
The EndNote Team. 2020. EndNote. Clarivate, Philadelphia, PA.


Authors and Affiliations
Name Country Affiliation
Avery Paxton United States NOAA National Centers for Coastal Ocean Science
Todd Swannack United States U.S. Army Engineer Research and Development Center
Candice Piercy United States U.S. Army Engineer Research and Development Center
Safra Altman United States U.S. Army Engineer Research and Development Center
Leanne Poussard United States NOAA National Centers for Coastal Ocean Science
Brandon Puckett United States NOAA National Centers for Coastal Ocean Science
Curt Storlazzi United States U.S. Geological Survey
Shay Viehman United States NOAA National Centers for Coastal Ocean Science


Submitted: Aug 28, 2023 | Published: Sep 15, 2023

© The Author(s) 2023.
This is an Open Access document distributed under the terms of the Creative Commons Attribution 4.0 International License https://creativecommons.org/licenses/by/4.0/deed.en .