
Published February 8, 2019 | Version 0.11.2
Software Open

datalad/datalad: 0.11.2 (Feb 07, 2019) -- live-long-and-prosper

Description

A variety of bugfixes and enhancements.

Major refactoring and deprecations
  • All extracted metadata is now placed under git-annex by default. Previously, files smaller than 20 kB were stored in git. (#3109)
  • The function datalad.cmd.get_runner has been removed. (#3104)
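The previous size-based split between git and git-annex can be expressed as a git-annex largefiles rule. A hedged sketch of roughly what such a .gitattributes rule looks like (the exact expression datalad used may differ):

```
# Hypothetical .gitattributes rule: route files larger than 20 kB to git-annex,
# keeping smaller ones in git (the pre-0.11.2 behavior for extracted metadata).
* annex.largefiles=(largerthan=20kb)
```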
Fixes
  • Improved handling of long commands:
    • The code that inspected SC_ARG_MAX didn't check that the reported value was a sensible, positive number. (#3025)
    • More commands that invoke git and git-annex with file arguments learned to split up the command calls when it is likely that the command would fail due to exceeding the maximum supported length. (#3138)
  • The setup_yoda_dataset procedure created a malformed .gitattributes line. (#3057)
  • download-url unnecessarily tried to infer the dataset when --no-save was given. (#3029)
  • rerun aborted too late and with a confusing message when a ref specified via --onto didn't exist. (#3019)
  • run:
    • run didn't preserve the current directory prefix ("./") on inputs and outputs, which is problematic if the caller relies on this representation when formatting the command. (#3037)
    • Fixed a number of unicode py2-compatibility issues. (#3035, #3046)
    • To proceed with a failed command, the user was confusingly instructed to use save instead of add even though run uses add underneath. (#3080)
  • Fixed a case where the helper class for checking external modules incorrectly reported a module as unknown. (#3051)
  • add-archive-content mishandled the archive path when the leading path contained a symlink. (#3058)
  • Following denied access, the credential code failed to consider a scenario, leading to a type error rather than an appropriate error message. (#3091)
  • Some tests failed when executed from a git worktree checkout of the source repository. (#3129)
  • During metadata extraction, batched annex processes weren't properly terminated, leading to issues on Windows. (#3137)
  • add incorrectly handled an "invalid repository" exception when trying to add a submodule. (#3141)
  • GIT_SSH_VARIANT=ssh is now passed to git processes so that alternative ports can be specified in SSH URLs.
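The two long-command fixes above boil down to validating the OS limit and batching file arguments under it. A minimal sketch of that idea, assuming illustrative helper names (this is not datalad's actual API):

```python
import os

def safe_arg_max(default=2097152):
    """Query the OS limit on command-line length, falling back to a
    conservative default when the reported value is not a sensible,
    positive number (some platforms report -1)."""
    try:
        value = os.sysconf("SC_ARG_MAX")
    except (ValueError, OSError):
        return default
    return value if value > 0 else default

def split_file_args(files, limit):
    """Yield batches of file arguments whose combined length stays
    under `limit`, so each underlying git/git-annex call fits."""
    batch, size = [], 0
    for f in files:
        flen = len(f) + 1  # +1 for the separating space
        if batch and size + flen > limit:
            yield batch
            batch, size = [], 0
        batch.append(f)
        size += flen
    if batch:
        yield batch
```

Each yielded batch can then be appended to one invocation of the underlying command, instead of passing all files at once and risking an "argument list too long" failure.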
Enhancements and new features
  • search learned to suggest closely matching keys if there are no hits. (#3089)
  • create-sibling:
    • gained a --group option so that the caller can specify the file system group for the repository. (#3098)
    • now understands SSH URLs that have a port in them (i.e. the "ssh://[user@]host.xz[:port]/path/to/repo.git/" syntax mentioned in man git-fetch). (#3146)
  • Interface classes can now override the default renderer for summarizing results. (#3061)
  • run:
    • --input and --output can now be shortened to -i and -o. (#3066)
    • Placeholders such as "{inputs}" are now expanded in the command that is shown in the commit message subject. (#3065)
    • interface.run.run_command gained an extra_inputs argument so that wrappers like datalad-container can specify additional inputs that aren't considered when formatting the command string. (#3038)
    • "--" can now be used to separate options for run and those for the command in ambiguous cases. (#3119)
  • The utilities create_tree and ok_file_has_content now support ".gz" files. (#3049)
  • The Singularity container for 0.11.1 now uses nd_freeze to make its builds reproducible.
  • A publications page has been added to the documentation. (#3099)
  • GitRepo.set_gitattributes now accepts a mode argument that controls whether the .gitattributes file is appended to (default) or overwritten. (#3115)
  • datalad --help now avoids using man so that the list of subcommands is shown. (#3124)
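The ssh:// form with an explicit port that create-sibling now understands can be decomposed with the Python standard library. A minimal sketch (not the parser datalad itself uses):

```python
from urllib.parse import urlparse

# Parse the "ssh://[user@]host.xz[:port]/path/to/repo.git/" syntax from
# man git-fetch into its user, host, port, and path components.
url = urlparse("ssh://user@host.xz:2222/path/to/repo.git/")
print(url.username, url.hostname, url.port, url.path)
```

The scp-like "user@host.xz:path" shorthand has no scheme and is not handled by urlparse this way; the explicit ssh:// form is what carries a port.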

Files

datalad/datalad-0.11.2.zip (1.5 MB)
md5:a502aabad066504d686c8dc343c67d3a

Additional details

Funding

U.S. National Science Foundation
CRCNS US-German Data Sharing: DataGit - converging catalogues, warehouses, and deployment logistics into a federated 'data distribution' (award 1429999)