Blodgett, D., Johnson, J.M., 2022, nhdplusTools: Tools for
Accessing and Working with the NHDPlus,
https://doi.org/10.5066/P97AS8JD
install.packages("nhdplusTools")
For the latest development version:
install.packages("remotes")
remotes::install_github("DOI-USGS/nhdplusTools")
For data discovery and access in a U.S. context, start with the Getting Started page.
Detailed documentation of all the package functions can be found at the Reference page.
nhdplusTools is designed to provide easy access to data associated with the U.S. National Hydrography Dataset. Many functions provided in nhdplusTools are thin wrappers around functions that have been migrated to hydroloom.
The nhdplusTools package is intended to provide a reusable set of tools to subset, relate data to, and generate network attributes for U.S. NHDPlus data. General, globally applicable functionality has been moved to hydroloom.
nhdplusTools implements a data model consistent with both the NHDPlus dataset and the HY_Features data model. The package aims to provide a set of tools that can be used to build workflows using NHDPlus data.
This vision is intended as a guide to contributors, conveying what kinds of contributions are of interest for the package's long-term vision. It reflects current thinking and is open to discussion and modification.
The following describes a vision for the functionality that should be included in the package in the long run.
The NHDPlus is a very large dataset, both spatially and in terms of the number of attributes it contains. Subsetting utilities provide network location discovery, network navigation, and data export capabilities to generate spatial and attribute subsets of the NHDPlus dataset.
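As a concrete illustration, the minimal sketch below follows the pattern used in the package vignettes: discover a COMID near a point, navigate the network through the NLDI, and export a local subset. The point location, navigation distance, and output path are arbitrary examples, not a definitive recipe.

```r
library(nhdplusTools)
library(sf)

# an arbitrary example location (longitude/latitude)
start_point <- st_sfc(st_point(c(-89.362239, 43.090266)), crs = 4326)

# network location discovery: COMID of the nearest flowline
start_comid <- discover_nhdplus_id(start_point)

# network navigation: walk upstream with tributaries via the NLDI
# (recent versions return a list; the "UT" element holds the flowlines)
flowlines <- navigate_nldi(list(featureSource = "comid", featureID = start_comid),
                           mode = "upstreamTributaries",
                           distance_km = 1000)

# data export: write a local geopackage subset of flowlines, catchments,
# and related layers
subset_file <- tempfile(fileext = ".gpkg")
subset_nhdplus(comids = as.integer(flowlines$UT$nhdplus_comid),
               output_file = subset_file,
               nhdplus_data = "download",
               flowline_only = FALSE,
               return_data = FALSE)
```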
One of the most important roles of the NHDPlus is as a connecting network for ancillary data and models. The first step in any workflow that uses the network this way is indexing relevant data to the network. A number of indexing methods exist; they can be broken into two main categories: linear referencing and catchment indexing. Both operate on features represented by points, lines, and polygons. nhdplusTools should eventually support both linear and catchment indexing.
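The hedged sketch below shows one way to do both kinds of indexing. It reuses the geopackage written by subset_nhdplus() in the earlier subsetting sketch, the point is an arbitrary location, and the layer names follow the subset_nhdplus() output conventions.

```r
library(nhdplusTools)
library(sf)

# flowlines from the local subset created above
flowlines <- read_sf(subset_file, "NHDFlowline_Network")

# an arbitrary point to index to the network
point <- st_sfc(st_point(c(-89.36, 43.09)), crs = 4326)

# linear referencing: nearest flowline COMID, REACHCODE, and reach measure
get_flowline_index(flowlines, point)

# catchment indexing: point-in-polygon against the catchment layer
catchments <- read_sf(subset_file, "CatchmentSP")
st_join(st_sf(geometry = st_transform(point, st_crs(catchments))), catchments)
```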
Given that nhdplusTools is focused on working with NHDPlus data, the NHDPlus data model will largely govern the data model the package is designed to work with. That said, much of the package functionality also uses concepts from the HY_Features standard.
Note: The HY_Features standard is based on the notion that a “catchment” is a holistic feature that can be “realized” (some might say modeled) in a number of ways. In other words, a catchment can only be characterized fully through a collection of different conceptual representations. In NHDPlus, the “catchment” feature is the polygon feature that describes the drainage divide around the hydrologic unit that contributes surface flow to a given NHD flowline. While this may seem like a significant difference, in reality, the NHDPlus COMID identifier lends itself very well to the HY_Features catchment concept. The COMID is used as an identifier for the catchment polygon, the flowline that connects the catchment inlet and outlet, and value added attributes that describe characteristics of the catchment’s interior. In this way, the COMID identifier is actually an identifier for a collection of data that together fully describe an NHDPlus catchment. See the NHDPlus mapping to HY_Features in the HY_Features specification.
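As a brief, hedged illustration of this identifier-as-collection idea, the sketch below joins catchment polygons to flowline attributes on their shared identifier, again using the geopackage written by subset_nhdplus() above. The layer and column names (NHDFlowline_Network, CatchmentSP, COMID, FEATUREID) follow NHDPlusV2 seamless conventions and may differ for other data sources.

```r
library(sf)

flowlines  <- read_sf(subset_file, "NHDFlowline_Network")  # identified by COMID
catchments <- read_sf(subset_file, "CatchmentSP")          # identified by FEATUREID

# one identifier ties together the catchment polygon, the flowline that
# connects its inlet and outlet, and the value added attributes
catchment_realizations <- merge(catchments, st_drop_geometry(flowlines),
                                by.x = "FEATUREID", by.y = "COMID")
```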
Below is a description of the scope of data used by the nhdplusTools package. While other data and attributes may come into scope, they should only do so as a naive pass-through (as in data subsetting) or with considerable deliberation.
Flowline geometry is a mix of 1-d streams and 1-d “artificial paths”. In order to complete the set of features meant to represent water, we need to include waterbody polygons.
Catchment polygons are the result of a complete elevation-derived hydrography process, with hydro-enforcement applied using both Watershed Boundary Dataset Hydrologic Units and NHD reaches.
The NHDPlus includes numerous attributes that are built using the network and enable a wide array of capabilities that would otherwise require excessive iteration or sophisticated and complex graph-oriented data structures and algorithms.
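As one example of how these attributes are surfaced, get_vaa() downloads and caches the national NHDPlusV2 value added attribute table. This is only a sketch; the attribute names below are commonly used columns, and get_vaa_names() lists everything available.

```r
library(nhdplusTools)

# hydroseq supports network sorting, streamorde is stream order, and
# totdasqkm is total upstream drainage area in square kilometers
vaa <- get_vaa(atts = c("comid", "hydroseq", "streamorde", "totdasqkm"))

head(vaa)
```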
The NHDPlus is a very large dataset. The architecture of this package as it relates to handling data and what dependencies are used will be very important.
nhdplusTools offers a mix of web service and local data functionality. Web services have generally been avoided for large processes. However, applications that would otherwise require loading significant amounts of local data to accomplish something a web service can do quickly are supported. Systems like the Network Linked Data Index are used for data discovery.
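A small discovery sketch using the Network Linked Data Index follows; the NWIS site identifier is only an example.

```r
library(nhdplusTools)

# an example USGS streamgage known to the NLDI
site <- list(featureSource = "nwissite", featureID = "USGS-05428500")

feature <- get_nldi_feature(site)  # site location as an sf point
basin <- get_nldi_basin(site)      # upstream drainage basin polygon

# list the feature sources the NLDI can search
head(get_nldi_sources())
```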
Initial package development focused on the National Seamless NHDPlus database. NHDPlus High Resolution is also supported.
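For NHDPlus High Resolution, a hedged sketch is shown below; the four-digit hydrologic unit code and download directory are placeholders, and the source data download is large.

```r
library(nhdplusTools)

hr_dir <- tempdir()

# download NHDPlus HR source data for one four-digit hydrologic unit
download_nhdplushr(hr_dir, hu_list = "0101")

# read flowlines and catchments with commonly needed attributes attached
hr_data <- get_nhdplushr(hr_dir, layers = c("NHDFlowline", "NHDPlusCatchment"))
```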
https://github.com/mbtyers/riverdist
https://github.com/jsta/nhdR
https://github.com/lawinslow/hydrolinks
https://github.com/mikejohnson51/HydroData
https://github.com/ropensci/FedData
https://github.com/hyriver/pygeohydro
… and others; please suggest additions.
This package uses a convention to avoid building vignettes on CRAN.
The BUILD_VIGNETTES environment variable must be set to TRUE. This is done with a .Renviron file in the package directory containing the line BUILD_VIGNETTES=TRUE.
Given this, the package should be built locally to include vignettes using:
devtools::build()
In addition to typical R package checking, a Dockerfile is included in this repository. The image can be built and the package checked with the following commands.
docker build -t nhdplustools_test .
docker run --rm -it -v $PWD:/src nhdplustools_test /bin/bash -c "cp -r /src/* /check/ && cp /src/.Rbuildignore /check/ && cd /check && Rscript -e 'devtools::build()' && R CMD check --as-cran ../nhdplusTools_*"
First, thanks for considering a contribution! I hope to make this package a community-created resource for us all to gain from and won't be able to do that without your help!
Contributions should be tested with testthat. Code style should follow the tidyverse style guide.

Other notes:
- consider running lintr prior to contributing.
- consider running goodpractice::gp() on the package before contributing.
- consider running devtools::spell_check() if you wrote documentation.
- this package uses pkgdown. Running pkgdown::build_site() will refresh it.
This information is preliminary or provisional and is subject to revision. It is being provided to meet the need for timely best science. The information has not received final approval by the U.S. Geological Survey (USGS) and is provided on the condition that neither the USGS nor the U.S. Government shall be held liable for any damages resulting from the authorized or unauthorized use of the information.
From: https://www.usgs.gov/office-of-science-quality-and-integrity/fundamental-science-practices#5
This software is in the public domain because it contains materials that originally came from the U.S. Geological Survey, an agency of the United States Department of Interior. For more information, see the official USGS copyright policy
Although this software program has been used by the USGS, no warranty, expressed or implied, is made by the USGS or the U.S. Government as to the accuracy and functioning of the program and related program material nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith. This software is provided “AS IS.”