# Version 5.1.0
- Add a new `reduce_plan()` function to do pairwise reductions on collections of targets.
- Forbid the dot (`.`) from being a dependency of any target or import. This enforces more consistent behavior in the face of the current static code analysis functionality, which sometimes detects `.` and sometimes does not.
- Add a new `ignore()` to optionally ignore pieces of workflow plan commands and/or imported functions (see the sketch below). Use `ignore(some_code)` to
    1. force `drake` to not track dependencies in `some_code`, and
    2. ignore any changes in `some_code` when it comes to deciding which targets are out of date.
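A minimal sketch of `ignore()` in a command. Here, `simulate()` and `summarize_data()` are hypothetical user-defined functions:

```r
library(drake)
plan <- drake_plan(
  data = simulate(100),
  # The ignore()d timestamp is not tracked as a dependency, and changes
  # to it do not make the summary target out of date.
  summary = summarize_data(data, label = ignore(Sys.time()))
)
```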
- Force `drake` to only look for imports in environments inheriting from `envir` in `make()` (plus explicitly namespaced functions).
- Force `loadd()` to ignore foreign imports (imports not explicitly found in `envir` when `make()` last imported them).
- Change `loadd()` so that only targets (not imports) are loaded if the `...` and `list` arguments are empty.
- Add a `.gitignore` file containing `"*"` to the default `.drake/` cache folder every time `new_cache()` is called. This means the cache will not be automatically committed to git. Users need to remove the `.gitignore` file to allow unforced commits, and then subsequent `make()`s on the same cache will respect the user's wishes and not add another `.gitignore`. This only works for the default cache; it is not supported for manual `storr`s.
- Add a `"future"` backend with a manual scheduler.
- Implement `dplyr`-style `tidyselect` functionality in `loadd()`, `clean()`, and `build_times()` (see the sketch below). For `build_times()`, there is an API change: for `tidyselect` to work, we needed to insert a new `...` argument as the first argument of `build_times()`.
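A sketch of the `tidyselect` interface, assuming targets `fit_1`, `fit_2`, and `fit_3` already exist in the cache:

```r
library(drake)
loadd(starts_with("fit_"))        # load fit_1, fit_2, and fit_3
build_times(starts_with("fit_"))  # the new ... is now the first argument
```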
- Add formal functions for declaring files in workflow plan commands (see the sketch below):
    - `file_in()` for file inputs to commands or imported functions (for imported functions, the input file needs to be an imported file, not a target).
    - `file_out()` for output file targets (ignored if used in imported functions).
    - `knitr_in()` for `knitr`/`rmarkdown` reports. This tells `drake` to look inside the source file for target dependencies in code chunks (explicitly referenced with `loadd()` and `readd()`). Treated as a `file_in()` if used in imported functions.
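A sketch of the file API; the file names are hypothetical:

```r
library(drake)
plan <- drake_plan(
  raw = read.csv(file_in("data.csv")),
  cleaned = write.csv(raw, file_out("cleaned.csv")),
  # knitr_in() makes drake scan report.Rmd for loadd()/readd() calls.
  report = rmarkdown::render(
    knitr_in("report.Rmd"),
    output_file = file_out("report.html")
  )
)
```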
- Change `drake_plan()` so that it automatically fills in any target names that the user does not supply. Also, any `file_out()`s become the target names automatically (double-quoted internally).
- Make `read_drake_plan()` (rather than an empty `drake_plan()`) the default `plan` argument in all functions that accept a `plan`.
- Add support for active bindings: `loadd(..., lazy = "bind")`. That way, when you have a target loaded in one R session and hit `make()` in another R session, the target in your first session will automatically update (see the sketch below).
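A sketch, where `my_target` is a hypothetical target from an earlier `make()`:

```r
library(drake)
# lazy = "bind" creates an active binding: if another R session
# rebuilds my_target with make(), this session sees the new value.
loadd(my_target, lazy = "bind")
```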
- Deprecate `dataframes_graph()`; `diagnose()` will take on the role of returning this metadata.
- Deprecate the `read_drake_meta()` function in favor of `diagnose()`.
- Add an `expose_imports()` function to optionally force `drake` to detect deeply nested functions inside specific packages.
- Change `drake_build()` to be an exclusively user-side function.
- Add a `replace` argument to `loadd()` so that objects already in the user's environment need not be replaced.
- Add a `seed` argument to `make()`, `drake_config()`, and `load_basic_example()`. Also hard-code a default seed of `0`. That way, the pseudo-randomness in projects should be reproducible across R sessions.
- Add a `drake_read_seed()` function to read the seed from the cache. Its examples illustrate what `drake` is doing to try to ensure reproducible random numbers (see the sketch below).
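A sketch using the basic example that ships with `drake` (`load_basic_example()` provides `my_plan`):

```r
library(drake)
load_basic_example()
make(my_plan, seed = 0)  # 0 is also the hard-coded default
drake_read_seed()        # recover the seed from the cache
#> [1] 0
```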
- Support the quasiquotation operator `!!` for the `...` argument to `drake_plan()`. Suppress this behavior using `tidy_evaluation = FALSE` or by passing in commands through the `list` argument.
- Preprocess workflow plan commands with `rlang::expr()` before evaluating them. That means you can use the quasiquotation operator `!!` in your commands, and `make()` will evaluate them according to the tidy evaluation paradigm (see the sketch below).
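A sketch of quasiquotation in `drake_plan()`; `simulate()` is a hypothetical function:

```r
library(drake)
n <- 100
# !!n splices the value of n into the stored command,
# so the command becomes simulate(100).
plan <- drake_plan(data = simulate(!!n))
```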
- Restructure `drake_example("basic")`, `drake_example("gsp")`, and `drake_example("packages")` to demonstrate how to set up the files for serious `drake` projects. More guidance was needed in light of this issue.
- Improve the examples of `drake_plan()` in the help file (`?drake_plan`).
# Version 5.0.0

- Transfer `drake` to rOpenSci: https://github.com/ropensci/drake
- Several functions now require an explicit `config` argument, which you can get from `drake_config()` or `make()` (see the sketch below).
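A sketch, using `outdated()` and `missed()` as examples and assuming `my_plan` from the basic example:

```r
library(drake)
load_basic_example()             # provides my_plan
config <- drake_config(my_plan)  # master configuration list
outdated(config)                 # which targets are out of date?
missed(config)                   # which imports are missing?
```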
- Use `cache$exists()` instead.
- Change how `make()` decides to build targets.
- Restructure the internals of the `storr` cache in a way that is not back-compatible with projects from versions 4.4.0 and earlier. The main change is to make more intelligent use of `storr` namespaces, improving efficiency (both time and storage) and opening up possibilities for new features. If you attempt to run drake >= 5.0.0 on a project from drake <= 4.0.0, drake will stop you before any damage to the cache is done, and you will be instructed how to migrate your project to the new drake.
- Use `formatR::tidy_source()` instead of `parse()` in `tidy_command()` (originally `tidy()` in `R/dependencies.R`). Previously, `drake` was having problems with an edge case: as a command, the literal string `"A"` was interpreted as the symbol `A` after tidying. With `tidy_source()`, literal quoted strings stay literal quoted strings in commands. This may put some targets out of date in old projects, yet another loss of back compatibility in version 5.0.0.
- Implement `rescue_cache()`, exposed to the user and used in `clean()`. This function removes dangling orphaned files in the cache so that a broken cache can be cleaned and used in the usual ways once more.
- Change the default `cpu` and `elapsed` arguments of `make()` to `NULL`. This solves an elusive bug in how drake imposes timeouts.
- Add a `graph` argument to functions `make()`, `outdated()`, and `missed()`.
- Add a new `prune_graph()` function for igraph objects.
- Deprecate `prune()` and `status()`.
- Deprecate and rename the following functions:
    - `analyses()` => `plan_analyses()`
    - `as_file()` => `as_drake_filename()`
    - `backend()` => `future::plan()`
    - `build_graph()` => `build_drake_graph()`
    - `check()` => `check_plan()`
    - `config()` => `drake_config()`
    - `evaluate()` => `evaluate_plan()`
    - `example_drake()` => `drake_example()`
    - `examples_drake()` => `drake_examples()`
    - `expand()` => `expand_plan()`
    - `gather()` => `gather_plan()`
    - `plan()`, `workflow()`, `workplan()` => `drake_plan()`
    - `plot_graph()` => `vis_drake_graph()`
    - `read_config()` => `read_drake_config()`
    - `read_graph()` => `read_drake_graph()`
    - `read_plan()` => `read_drake_plan()`
    - `render_graph()` => `render_drake_graph()`
    - `session()` => `drake_session()`
    - `summaries()` => `plan_summaries()`
- Deprecate `output` and `code` as names in the workflow plan data frame. Use `target` and `command` instead. This naming switch has been formally deprecated for several months prior.
- Add `drake_quotes()`, `drake_unquote()`, and `drake_strings()` to remove the silly dependence on the `eply` package.
- Add a `skip_safety_checks` flag to `make()` and `drake_config()`. Increases speed.
- In `sanitize_plan()`, remove rows with blank targets `""`.
- Add a `purge` argument to `clean()` to optionally remove all target-level information.
- Add a `namespace` argument to `cached()` so users can inspect individual `storr` namespaces.
- Change `verbose` to numeric: 0 = print nothing, 1 = print progress on imports only, 2 = print everything.
- Add a new `next_stage()` function to report the targets to be made in the next parallelizable stage.
- Add a `session_info` argument to `make()`. Apparently, `sessionInfo()` is a bottleneck for small `make()`s, so there is now an option to suppress it. This is mostly for the sake of speeding up unit tests.
- Add a `log_progress` argument to `make()` to suppress progress logging. This increases storage efficiency and speeds some projects up a tiny bit.
- Add a `namespace` argument to `loadd()` and `readd()`. You can now load and read from non-default `storr` namespaces.
- Add `drake_cache_log()`, `drake_cache_log_file()`, and `make(..., cache_log_file = TRUE)` as options to track changes to targets/imports in the drake cache (see the sketch below).
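A sketch, assuming `my_plan` from the basic example:

```r
library(drake)
make(my_plan, cache_log_file = TRUE)  # write a flat text log of the cache
drake_cache_log()                     # the same information as a data frame
```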
- Support reports rendered with `rmarkdown::render()`, not just `knit()`.
- … `drake` properly.
- Change `plot_graph()` to display subcomponents. Check out the arguments `from`, `mode`, `order`, and `subset`. The graphing vignette has demonstrations.
- Add `"future_lapply"` parallelism: parallel backends supported by the `future` and `future.batchtools` packages (see the sketch below). See `?backend` for examples and the parallelism vignette for an introductory tutorial. More advanced instruction can be found in the `future` and `future.batchtools` packages themselves.
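A sketch, assuming `my_plan` from the basic example:

```r
library(drake)
library(future)
future::plan(multisession)  # any future/future.batchtools backend works
load_basic_example()
make(my_plan, parallelism = "future_lapply")
```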
- … `diagnose()`.
- Add a `hook` argument to `make()` to wrap around `build()`. That way, users can more easily control the side effects of distributed jobs. For example, to redirect error messages to a file in `make(..., parallelism = "Makefile", jobs = 2, hook = my_hook)`, `my_hook` should be something like `function(code){withr::with_message_sink("messages.txt", code)}`.
- `Drake` was previously using the `outfile` argument for PSOCK clusters to generate output that could not be caught by `capture.output()`. It was a hack that should have been removed before.
- Make `make()` and `outdated()` print "All targets are already up to date" to the console.
- … `"future_lapply"` backends.
- … `plot_graph()` and `progress()`. Also see the new `failed()` function, which is similar to `in_progress()`.
- Speed up `parLapply` parallelism. The downside to this fix is that `drake` has to be properly installed. It should not be loaded with `devtools::load_all()`. The speedup comes from lightening the first `clusterExport()` call in `run_parLapply()`. Previously, we exported every single individual `drake` function to all the workers, which created a bottleneck. Now, we just load `drake` itself in each of the workers, which works because `build()` and `do_prework()` are exported.
- Change the default value of `overwrite` to `FALSE` in `load_basic_example()`.
- … `report.Rmd` in `load_basic_example()`.
- Support `get_cache(..., verbose = TRUE)`.
- Refactor `lightly_parallelize()` and `lightly_parallelize_atomic()`. Now, processing happens faster, and only over the unique values of a vector.
- Add a `make_with_config()` function to do the work of `make()` on an existing internal configuration list from `drake_config()` (see the sketch below).
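A sketch:

```r
library(drake)
load_basic_example()
config <- drake_config(my_plan)  # compute the configuration once
make_with_config(config)         # reuse it instead of recomputing it
```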
- Add `drake_batchtools_tmpl_file()` to write a `batchtools` template file from one of the examples (`drake_example()`), if one exists.

# Version 4.3.0

Version 4.3.0 has: …
# Version 4.2.0

Version 4.2.0 will be released today. There are several improvements to code style and performance. In addition, there are new features such as cache/hash externalization and runtime prediction. See the new storage and timing vignettes for details. This release has automated checks for back-compatibility with existing projects, and I also did manual back-compatibility checks on serious projects.
# Version 3.0.0

Version 3.0.0 is coming out. It manages environments more intelligently so that the behavior of `make()` is more consistent with evaluating your code in an interactive session.