Computes estimates, with confidence intervals, of the population size and probability of capture from the numbers of fish removed in k-, 3-, or 2-pass removal sampling of a closed population.

removal(catch, method = c("CarleStrub", "Zippin", "Seber3", "Seber2",
  "RobsonRegier2", "Moran", "Schnute"), alpha = 1, beta = 1,
  CS.se = c("Zippin", "alternative"), conf.level = 0.95,
  just.ests = FALSE, Tmult = 3)

# S3 method for removal
summary(object, parm = c("No", "p", "p1"),
  digits = getOption("digits"), verbose = FALSE, ...)

# S3 method for removal
confint(object, parm = c("No", "p"),
  level = conf.level, conf.level = NULL,
  digits = getOption("digits"), verbose = FALSE, ...)

Arguments

catch

A numerical vector of catch at each pass.

method

A single string that identifies the removal method to use. See details.

alpha

A single numeric value for the alpha parameter in the CarleStrub method (default is 1).

beta

A single numeric value for the beta parameter in the CarleStrub method (default is 1).

CS.se

A single string that identifies whether the SE in the CarleStrub method should be computed according to Zippin (the default) or an alternative (Seber) method. See details.

conf.level

A single number that indicates the level of confidence to use for constructing confidence intervals. Note that this is set in the main removal() function rather than in confint().

just.ests

A logical that indicates whether just the estimates (=TRUE) or the return list (=FALSE; default; see below) is returned.

Tmult

A single numeric that will be multiplied by the total catch in all samples to set the upper bound of the range of population sizes over which the log-likelihood is minimized and confidence intervals are constructed for the Moran and Schnute methods. Large values are much slower to compute, but too low a value can result in missing the best estimate. A warning is issued if too low a value is suspected.

object

An object saved from removal().

parm

A specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered.

digits

A single numeric that controls the number of decimals in the output from summary and confint.

verbose

A logical that indicates whether descriptive labels should be printed by summary and whether certain warnings are shown by confint.

...

Additional arguments for methods.

level

Not used, but here for compatibility with generic confint function.

Value

A vector that contains the estimates and standard errors for No and p if just.ests=TRUE or (default) a list with at least the following items:

  • catch The original vector of observed catches.

  • method The method used (provided by the user).

  • lbl A descriptive label for the method used.

  • est A matrix that contains the estimates and standard errors for No and p.

In addition, if the Moran or Schnute methods are used the list will also contain

  • min.nlogLH The minimum value of the negative log-likelihood function.

  • Tmult The Tmult value sent by the user.

Details

The main function computes the estimates and associated standard errors, if possible, for the initial population size, No, and probability of capture, p, for seven methods chosen with method=. The possible methods are:

  • method="CarleStrub": The general weighted k-pass estimator proposed by Carle and Strub (1978). This function iteratively solves for No in equation 7 of Carle and Strub (1978).

  • method="Zippin": The general k-pass estimator generally attributed to Zippin. This function iteratively solves for No in bias corrected version of equation 3 (page 622) of Carle and Strub (1978). These results are not yet trustworthy (see Testing section below).

  • method="Seber3": The special case for k=3 estimator shown in equation 7.24 of Seber(2002).

  • method="Seber2": The special case for k=2 estimator shown on page 312 of Seber(2002).

  • method="RobsonRegier2": The special case for k=2 estimator shown by Robson and Regier (1968).

  • method="Moran": The likelihood method of Moran (1951) as implemented by Schnute (1983).

  • method="Schnute": The likelihood method of Schnute (1983) for the model that has a different probability of capture for the first sample but a constant probability of capture for all ensuing samples..

Confidence intervals for the first five methods are computed using standard large-sample normal distribution theory. Note that the confidence intervals for the 2- and 3-pass special cases are only approximately correct if the estimated population size is greater than 200. If the estimated population size is between 50 and 200 then a 95% CI behaves more like a 90% CI.
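These are ordinary normal-theory intervals of the form estimate ± z·SE. For example, the default 95% interval for No from the first example below can be reproduced by hand:

est <- 233; se <- 31.3578504        # from summary(p1) in the examples
est+c(-1,1)*qnorm(0.975)*se         # 171.5397 294.4603, as from confint(p1)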

Confidence intervals for the last two methods use likelihood ratio theory as described in Schnute (1983) and are produced only for the No parameter. Standard errors are not produced with the Moran or Schnute methods.
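As a rough illustration of both the likelihood machinery and the role of Tmult=, the Moran profile likelihood can be sketched as follows (assumed names; not the FSA internals). For a candidate No, the MLE of p is available in closed form, and No is found by minimizing the profiled negative log-likelihood between T and Tmult*T:

moran.nll <- function(No,catch) {
  k <- length(catch)
  tot <- sum(catch)                              # T, the total catch
  p <- tot/(sum(seq_len(k)*catch)+k*(No-tot))    # MLE of p given No
  q <- p*(1-p)^(seq_len(k)-1)                    # per-pass capture probabilities
  -(lgamma(No+1)-lgamma(No-tot+1)+sum(catch*log(q))+(No-tot)*k*log(1-p))
}
ct3 <- c(77,50,37)
fit <- optimize(moran.nll,interval=c(sum(ct3),3*sum(ct3)),catch=ct3)  # Tmult=3
fit$minimum   # ~237.6, as from removal(ct3,method="Moran")
# the 95% CI is the set of No with nll within qchisq(0.95,1)/2 of the minimum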

In the Carle-Strub method, if the resultant No estimate is equal to the sum of the catches (T), then the estimate of No that is returned will be the sum of the catches. In this instance, if the “Seber” method of computing the standard error is used, the SE will not be estimable and confidence intervals cannot be constructed.

Testing

The Carle-Strub method matches the examples in Carle and Strub (1978) for No, p, and the variance of No. The Carle-Strub estimates of No and p match the examples in Cowx (1983) but the SE of No does not. The Carle-Strub estimates of No match the results (for estimates that they did not reject) from Jones and Stockwell (1995) to within 1 individual in most instances and within 1% for all other instances (e.g., off by 3 individuals when the estimate was 930 individuals).

The Seber3 results for No match the results in Cowx (1983).

The Seber2 results for No, p, and the SE of No match the results in example 7.4 of Seber (2002) and in Cowx (1983).
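Those values follow from the standard closed-form two-pass formulas and can be verified by hand:

c1 <- 77; c2 <- 37                       # the 2-pass catches from the examples
c1^2/(c1-c2)                             # No estimate: 148.225
(c1-c2)/c1                               # p estimate:  0.5194805
sqrt(c1^2*c2^2*(c1+c2)/(c1-c2)^4)        # SE of No:    19.01187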

The RobsonRegier2 results for No and the SE of No match the results in Cowx (1983).

The Zippin method results do not match the examples in Seber (2002) or Cowx (1983) because removal uses the bias-corrected version from Carle and Strub (1978) and does not use the tables in Zippin (1958). The Zippin method is not yet trustworthy.

The Moran and Schnute methods match the examples in Schnute (1983) perfectly for all point estimates and within 0.1 units for all confidence intervals.

IFAR Chapter

10-Abundance from Depletion Data.

References

Ogle, D.H. 2016. Introductory Fisheries Analyses with R. Chapman & Hall/CRC, Boca Raton, FL.

Carle, F.L. and M.R. Strub. 1978. A new method for estimating population size from removal data. Biometrics, 34:621-630.

Cowx, I.G. 1983. Review of the methods for estimating fish population size from survey removal data. Fisheries Management, 14:67-82.

Moran, P.A.P. 1951. A mathematical theory of animal trapping. Biometrika 38:307-311.

Robson, D.S., and H.A. Regier. 1968. Estimation of population number and mortality rates. pp. 124-158 in Ricker, W.E. (editor) Methods for Assessment of Fish Production in Fresh Waters. IBP Handbook No. 3. Blackwell Scientific Publications, Oxford.

Schnute, J. 1983. A new approach to estimating populations by the removal method. Canadian Journal of Fisheries and Aquatic Sciences, 40:2153-2169.

Seber, G.A.F. 2002. The Estimation of Animal Abundance. Edward Arnold, second edition (Reprint).

van Dishoeck, P. 2009. Effects of catchability variation on performance of depletion estimators: Application to an adaptive management experiment. Masters Thesis, Simon Fraser University. [Was (is?) available from http://rem-main.rem.sfu.ca/theses/vanDishoeckPier_2009_MRM483.pdf.]

See also

See depletion for related functionality.

Examples

## First example -- 3 passes
ct3 <- c(77,50,37)

# Carle Strub (default) method
p1 <- removal(ct3)
summary(p1)
#>       Estimate Std. Error
#> No 233.0000000 31.3578504
#> p    0.3313131  0.0666816
summary(p1,verbose=TRUE)
#> The Carle & Strub (1978) K-Pass Removal Method method was used.
#>       Estimate Std. Error
#> No 233.0000000 31.3578504
#> p    0.3313131  0.0666816
summary(p1,parm="No")
#>    Estimate Std. Error
#> No      233   31.35785
summary(p1,parm="p")
#>    Estimate Std. Error
#> p 0.3313131  0.0666816
confint(p1)
#>        95% LCI     95% UCI
#> No 171.5397426 294.4602574
#> p    0.2006195   0.4620067
confint(p1,parm="No")
#>     95% LCI  95% UCI
#> No 171.5397 294.4603
confint(p1,parm="p")
#>     95% LCI   95% UCI
#> p 0.2006195 0.4620067
# Moran method
p2 <- removal(ct3,method="Moran")
summary(p2,verbose=TRUE)
#> The Moran (1951) K-Pass Removal Method method was used (SEs not computed).
#>       Estimate
#> No 237.5965440
#> p    0.3223336
confint(p2,verbose=TRUE)
#>    95% LCI 95% UCI
#> No   194.7   370.9
# Schnute method
p3 <- removal(ct3,method="Schnute")
summary(p3,verbose=TRUE)
#> The Schnute (1983) K-Pass Removal Method w/ Non-constant Initial Catchability method was used (SEs not computed).
#>       Estimate
#> No 245.0955993
#> p    0.3039926
#> p1   0.3141631
confint(p3,verbose=TRUE)
#> An upper confidence value for 'No' cannot be determined.
#>    95% LCI 95% UCI
#> No   183.9     Inf
## Second example -- 2 passes
ct2 <- c(77,37)

# Seber method
p4 <- removal(ct2,method="Seber2")
summary(p4,verbose=TRUE)
#> The Seber (2002) 2-Pass Removal Method method was used.
#>       Estimate Std. Error
#> No 148.2250000 19.0118725
#> p    0.5194805  0.0961208
confint(p4)
#>        95% LCI     95% UCI
#> No 110.9624147 185.4875853
#> p    0.3310873   0.7078737
### Test if catchability differs between first sample and the other samples
# chi-square test statistic from negative log-likelihoods
# from Moran and Schnute fits (from above)
chi2.val <- 2*(p2$min.nlogLH-p3$min.nlogLH)
# p-value ... no significant difference
pchisq(chi2.val,df=1,lower.tail=FALSE)
#> [1] 0.8882765
# Another LRT example ... sample 1 from Schnute (1983)
ct4 <- c(45,11,18,8)
p2a <- removal(ct4,method="Moran")
p3a <- removal(ct4,method="Schnute")
chi2.val <- 2*(p2a$min.nlogLH-p3a$min.nlogLH)  # 4.74 in Schnute (1983)
# significant difference (catchability differs)
pchisq(chi2.val,df=1,lower.tail=FALSE)
#> [1] 0.02955309
summary(p3a)
#>       Estimate
#> No 123.5879687
#> p    0.1890032
#> p1   0.3641131
### Using lapply() to use removal() on many different groups
### with the removals in a single variable ("long format")

## create a dummy data frame
lake <- factor(rep(c("Ash Tree","Bark","Clay"),each=5))
year <- factor(rep(c("2010","2011","2010","2011","2010","2011"),times=c(2,3,3,2,2,3)))
pass <- factor(c(1,2,1,2,3,1,2,3,1,2,1,2,1,2,3))
catch <- c(57,34,65,34,12,54,26,9,54,27,67,34,68,35,12)
d <- data.frame(lake,year,pass,catch)

## create a variable that indicates each different group
d$group <- with(d,interaction(lake,year))
d
#>        lake year pass catch         group
#> 1  Ash Tree 2010    1    57 Ash Tree.2010
#> 2  Ash Tree 2010    2    34 Ash Tree.2010
#> 3  Ash Tree 2011    1    65 Ash Tree.2011
#> 4  Ash Tree 2011    2    34 Ash Tree.2011
#> 5  Ash Tree 2011    3    12 Ash Tree.2011
#> 6      Bark 2010    1    54     Bark.2010
#> 7      Bark 2010    2    26     Bark.2010
#> 8      Bark 2010    3     9     Bark.2010
#> 9      Bark 2011    1    54     Bark.2011
#> 10     Bark 2011    2    27     Bark.2011
#> 11     Clay 2010    1    67     Clay.2010
#> 12     Clay 2010    2    34     Clay.2010
#> 13     Clay 2011    1    68     Clay.2011
#> 14     Clay 2011    2    35     Clay.2011
#> 15     Clay 2011    3    12     Clay.2011

## split the catch by the different groups (creates a list of catch vectors)
ds <- split(d$catch,d$group)

## apply removal() to each catch vector (i.e., different group)
res <- lapply(ds,removal,just.ests=TRUE)
res <- data.frame(t(data.frame(res,check.names=FALSE)))

## get rownames from above and split into separate columns
nms <- t(data.frame(strsplit(rownames(res),"\\.")))
attr(nms,"dimnames") <- NULL

## put names together with values
fnl <- data.frame(nms,res)
rownames(fnl) <- NULL
colnames(fnl)[1:2] <- c("Lake","Year")
fnl
#>       Lake Year  No     No.se    No.LCI   No.UCI         p       p.se     p.LCI     p.UCI
#> 1 Ash Tree 2010 130 26.108558  78.82817 181.1718 0.4482759 0.12120594 0.2107166 0.6858351
#> 2     Bark 2010  95  4.247687  86.67469 103.3253 0.5894040 0.06418406 0.4636055 0.7152024
#> 3     Clay 2010 130 17.446161  95.80615 164.1938 0.5233161 0.10171976 0.3239490 0.7226831
#> 4 Ash Tree 2011 121  5.771511 109.68805 132.3120 0.5577889 0.06016508 0.4398676 0.6757103
#> 5     Bark 2011 103 14.760527  74.06990 131.9301 0.5328947 0.11173743 0.3138934 0.7518961
#> 6     Clay 2011 125  5.666291 113.89427 136.1057 0.5637255 0.05857289 0.4489247 0.6785262
### Using apply() to use removal() on many different groups
### with the removals in several variables ("wide format")

## create a dummy data frame (just reshaped from above as
## an example; -5 to ignore the group variable from above)
d1 <- reshape(d[,-5],timevar="pass",idvar=c("lake","year"),direction="wide")

## apply removal() to each row of only the catch data
res1 <- apply(d1[,3:5],MARGIN=1,FUN=removal,method="CarleStrub",just.ests=TRUE)
#> Warning: 'NA's removed from 'catch' to continue.
#> Warning: 'NA's removed from 'catch' to continue.
#> Warning: 'NA's removed from 'catch' to continue.
res1 <- data.frame(t(data.frame(res1,check.names=FALSE)))

## add the grouping information to the results
fnl1 <- data.frame(d1[,1:2],res1)

## put names together with values
rownames(fnl1) <- NULL
fnl1
#>       lake year  No     No.se    No.LCI   No.UCI         p       p.se     p.LCI     p.UCI
#> 1 Ash Tree 2010 130 26.108558  78.82817 181.1718 0.4482759 0.12120594 0.2107166 0.6858351
#> 2 Ash Tree 2011 121  5.771511 109.68805 132.3120 0.5577889 0.06016508 0.4398676 0.6757103
#> 3     Bark 2010  95  4.247687  86.67469 103.3253 0.5894040 0.06418406 0.4636055 0.7152024
#> 4     Bark 2011 103 14.760527  74.06990 131.9301 0.5328947 0.11173743 0.3138934 0.7518961
#> 5     Clay 2010 130 17.446161  95.80615 164.1938 0.5233161 0.10171976 0.3239490 0.7226831
#> 6     Clay 2011 125  5.666291 113.89427 136.1057 0.5637255 0.05857289 0.4489247 0.6785262