drake_plan: Create a workflow plan data frame for the plan argument of make().
Source: R/workplan.R
Turns a named collection of target/command pairs into
a workflow plan data frame for make().
You can give the commands as named expressions through ...,
or you can use the list argument to supply them as character strings.
drake_plan(..., list = character(0), file_targets = NULL, strings_in_dots = pkgconfig::get_config("drake::strings_in_dots"), tidy_evaluation = TRUE)
... | A collection of symbols/targets with commands assigned to them. See the examples for details. |
---|---|
list | A named character vector of commands with names as targets. |
file_targets | Deprecated argument. See file_in(), file_out(), and knitr_in(). |
strings_in_dots | Deprecated argument for handling strings in commands specified in the ... argument. In the past, this argument was a character scalar denoting how to treat quoted character strings in the commands specified through .... To fully embrace the new file API, use file_in(), file_out(), and knitr_in() in your commands instead. |
tidy_evaluation | Logical, whether to use tidy evaluation (such as quasiquotation) when evaluating commands passed through the free-form ... argument. |
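To illustrate the two ways to supply commands, here is a sketch of two calls that build equivalent plans. (This assumes the drake package is attached; simulate() is a placeholder for any function you define yourself.)

```r
library(drake)

# Commands as named expressions via `...`:
plan1 <- drake_plan(
  small = simulate(5),
  large = simulate(50)
)

# The same commands as character strings via the `list` argument:
plan2 <- drake_plan(
  list = c(small = "simulate(5)", large = "simulate(50)")
)
```

Both calls yield a workflow plan data frame with targets small and large and the corresponding commands.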
A data frame of targets and commands. See the details for optional columns you can append manually post-hoc.
A workflow plan data frame is a data frame
with a target column and a command column.
Targets are the objects and files that drake generates,
and commands are the pieces of R code that produce them.
To use custom files in your workflow plan,
use the file_in(), knitr_in(), and file_out()
functions in your commands.
The examples in this help file provide some guidance.
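As a sketch of how the file API functions fit into commands (the target names and report file here are illustrative, not prescribed):

```r
library(drake)

# Hypothetical plan using the file API. file_out() declares mtcars.csv
# as an output file of `raw`, and file_in() makes `data` depend on it.
# knitr_in() tells drake to scan the active code chunks of report.Rmd
# for dependencies such as loadd()/readd() calls.
plan <- drake_plan(
  raw = write.csv(mtcars, file_out("mtcars.csv")),
  data = read.csv(file_in("mtcars.csv")),
  report = knit(knitr_in("report.Rmd"), file_out("report.md")),
  strings_in_dots = "literals"
)
```

Because the file relationships are declared inside the commands, make() can order the targets correctly: raw runs before data, since data reads the file that raw writes.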
Besides the target and command columns, there are optional columns
you may append to your workflow plan data frame:
trigger: a character vector of triggers. A trigger is a rule that decides
when a target (re)builds. See triggers() for your options.
For a walkthrough, see
https://github.com/ropensci/drake/blob/master/vignettes/debug.Rmd#test-with-triggers.
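A sketch of appending a trigger column by hand, assuming the character trigger names returned by triggers() (get_data() and fit_model() are placeholders for your own functions):

```r
library(drake)

plan <- drake_plan(
  data = get_data(),
  model = fit_model(data)
)
# Optional trigger column: "always" forces `data` to rebuild on every
# make(), while "command" rebuilds `model` only when its command changes.
plan$trigger <- c("always", "command")
```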
retries: number of times to retry building a target if it fails
to build the first time.
timeout: seconds of overall time to allow before imposing
a timeout on a target. Passed to R.utils::withTimeout().
Assign target-level timeouts with an optional timeout column in the plan.
cpu: seconds of CPU time to allow before imposing
a timeout on a target. Passed to R.utils::withTimeout().
Assign target-level CPU timeouts with an optional cpu column in the plan.
elapsed: seconds of elapsed time to allow before imposing
a timeout on a target. Passed to R.utils::withTimeout().
Assign target-level elapsed timeouts with an optional elapsed column in the plan.
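For example, the retries and elapsed columns can be appended post-hoc like any other data frame column (download_data() and crunch_numbers() are placeholders for your own functions):

```r
library(drake)

plan <- drake_plan(
  flaky = download_data(),
  slow  = crunch_numbers()
)
# Optional target-level columns: retry `flaky` up to 2 times if it fails,
# and abort `slow` if it exceeds 60 seconds of elapsed time.
plan$retries <- c(2, 0)
plan$elapsed <- c(Inf, 60)
```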
evaluator: an experimental column. Each entry is a function
passed to the evaluator argument of future::future()
for each worker in make(..., parallelism = "future").
test_with_dir("Contain side effects", {
  # Create workflow plan data frames.
  mtcars_plan <- drake_plan(
    write.csv(mtcars[, c("mpg", "cyl")], file_out("mtcars.csv")),
    value = read.csv(file_in("mtcars.csv")),
    strings_in_dots = "literals"
  )
  mtcars_plan
  make(mtcars_plan) # Makes `mtcars.csv` and then `value`
  head(readd(value))
  # You can use knitr inputs too. See the top command below.
  load_basic_example()
  head(my_plan)
  # The `knitr_in("report.Rmd")` tells `drake` to dive into the active
  # code chunks to find dependencies.
  # There, `drake` sees that `small`, `large`, and `coef_regression2_small`
  # are loaded in with calls to `loadd()` and `readd()`.
  deps("report.Rmd")
  # Are you a fan of tidy evaluation?
  my_variable <- 1
  drake_plan(
    a = !!my_variable,
    b = !!my_variable + 1,
    list = c(d = "!!my_variable")
  )
  drake_plan(
    a = !!my_variable,
    b = !!my_variable + 1,
    list = c(d = "!!my_variable"),
    tidy_evaluation = FALSE
  )
  # For instances of !! that remain unevaluated in the workflow plan,
  # make() will run these commands in tidy fashion,
  # evaluating the !! operator using the environment you provided.
})