Published June 4, 2019 | Version v1.2
Software | Open Access

A comparison of beta-escin and whole-cell for the added buffer approach

  • 1. Köln University
  • 2. Paris-Descartes University and CNRS
  • 3. Department of Neuroscience, Karolinska Institutet



Data, codes and analysis of the added buffer approach with beta-escin perforated patch.

A joint work by: Simon Hess, Christophe Pouzat, Lars Paeger and Peter Kloppenburg


The data, codes and analysis contained in the present repository describe a proposed improvement of the "added buffer approach" introduced by E. Neher and G. J. Augustine in their 1992 paper Calcium gradients and buffers in bovine chromaffin cells (J. Physiol. 450 (1): 273-301). This experimental method is used to estimate the calcium buffering capacity, defined in a way analogous to the pH buffering capacity of an acid-base solution. Roughly speaking, we are trying to answer the following question: when 100 calcium ions enter the cell, how many actually remain free in the cytosol? Depending on the cell type, it is thought or argued that most, or nearly none, will remain free. The reason why at least some won't remain free is the presence of endogenous buffering proteins, as well as sequestration in intracellular organelles like the mitochondria or the endoplasmic reticulum.
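In symbols, using a standard single-buffer formulation consistent with Neher and Augustine's definition (here S is an endogenous buffer with total concentration [S]_total and dissociation constant K_d), the buffering capacity is

```latex
\kappa_S \;\equiv\; \frac{\partial[\mathrm{SCa}]}{\partial[\mathrm{Ca}^{2+}]}
\;=\; \frac{[\mathrm{S}]_{\mathrm{total}}\,K_d}{\bigl(K_d+[\mathrm{Ca}^{2+}]\bigr)^{2}},
```

so that of a small increment of calcium entering the cell, a fraction of roughly 1/(1 + κ_S) remains free: with κ_S = 99, only 1 ion out of 100 does.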

The added buffer approach consists in introducing into the cell a "controlled" concentration of an exogenous buffer that is also a calcium-sensitive dye (a molecule whose absorption or emission properties change upon reversible binding of a calcium ion), like Fura-2; we therefore perform calcium imaging while disturbing the calcium dynamics of the cells we are measuring. Then, assuming the validity of a calcium dynamics model, it is possible in principle to extract the native or undisturbed calcium buffering capacity of the cell from a sequence of measurements (of the decay times of the [Ca^2+^] return to baseline after a stimulation) with increasing concentrations of exogenous buffer. The main point of contention in the assumed calcium dynamics model is that the concentration of endogenous calcium-binding proteins is assumed constant; more than that, the whole [Ca^2+^] regulation machinery of the cell is assumed stable. This is contentious since the exogenous buffer is brought into the cell with the whole-cell configuration of the patch-clamp method, which consists in attaching a glass capillary or pipette to the cell (to have the relative dimensions in mind, one can think of attaching the Eiffel tower to a car). This configuration is known to perturb (to say the least) the intracellular medium and to let small mobile proteins (which can be calcium-binding proteins) leave the cell and disappear into the pipette. The "classical" way of avoiding strong intracellular perturbation is to use the perforated patch configuration introduced by R. Horn and A. Marty in their 1988 paper Muscarinic activation of ionic currents measured by a new whole-cell recording method (J. Gen. Physiol. 92 (2): 145). But the usual perforating agents do not create pores large enough to let the calcium indicators go into the cell. Up to now it was therefore impossible to combine the added buffer approach with perforated patch recordings.
What the present study does is combine the latter two techniques, using β-escin, a saponin, as the perforating agent. The companion paper describes this study and presents results showing that this perforating agent performs very well at preserving cell integrity while letting our calcium indicator in.

This repository focuses on the calcium dynamics issue comparing recordings done using the usual whole-cell configuration with recordings using a β-escin based perforated patch configuration. In our manuscript we argue that the latter are far superior to the former. Here we give access to data and codes allowing anyone to check (and challenge) our claims.


This repository contains:

  • beta_ecsin_supp_C.xx files, where xx can be org, html or pdf. These documents (the same content in different formats) contain a literate description of the C code that was developed for this study. They are meant to be read (this does not imply that the reading is necessarily pleasant!). The algorithmic and numerical choices made are spelled out step by step. Tests are regularly implemented, including memory-leak tests using Valgrind. The "tricky" computations are all done using functions from the GNU Scientific Library. The code aba_ratio (the prefix stands for "added buffer approach") performs the whole added buffer approach for a complete experiment, which includes a loading curve and at least three transients.
  • Directory data_paper contains the experimental data in HDF5 format. The data are organized in sub-directories containing dopaminergic neuron recordings with β-escin (data_beta_escin) and dopaminergic neuron recordings with whole-cell (data_whole_cell). The data organization within each HDF5 file is described in the beta_ecsin_supp_C.xx documents.
  • Directory code contains the C source files together with the Makefile required to compile the code (for anyone wanting to replicate the analysis). The compilation is thoroughly described in the beta_ecsin_supp_C.xx documents.
  • Directory figs contains the figures, in png format, shown in the beta_ecsin_supp_C.xx documents. These last two directories can be created and populated automatically from the document.
  • Directory analysis contains a document, systematic_analysis.xx, describing the Python 3 code used for piloting the systematic and automatic analysis of all the experiments. It also contains the file beta_wc_comp.html, which can be used for a quick overview of the analysis results, with links to the detailed analysis of each individual experiment. Two Python 3 scripts are included (they do the job of the automatic analysis), as well as two sub-directories, DA-beta and DA-wc, containing the full results (with figures) of each experiment's analysis. The two Python scripts, together with the two sub-directories and their contents, can be automatically (re)generated from the systematic_analysis.xx file.

Design choices

Why C, gnuplot and the shell?

We mainly use the shell (bash or zsh) for interactive analysis and write the short functions performing the actual work in C. The motivation for this approach comes from two books by Ben Klemens: Modeling With Data and 21st Century C. The main advantages of C compared to other "languages" like Python, R or Matlab are:

  • Its stability (the programs written here are very likely to run unchanged 20 years from now).
  • The development tools that come with it are just spectacular (see the very short and very clear book of Brian Gough An Introduction to GCC to understand what I mean by that).

Required software and libraries

Since a Bash or a Z shell is going to be used, Windows users will have to install Cygwin; Linux and MacOS users should have the bash shell by default and the zsh shell readily available from their package manager. To dig deeper into the amazing possibilities (and spectacular editorial support) of these tools, check From Bash to Z Shell. Conquering the Command Line by Kiddle, Peek and Stephenson.

The non-shell code is going to be written in C, meaning that a C compiler together with the "classical" development tools (make, etc.) is required. I'm going to use gcc here.

The heavy computational work is going to be performed mainly by the gsl (the GNU Scientific Library), which is easily installed through your package manager (from now on, for Windows users, the "package manager" refers to the one of Cygwin). The graphs are generated with gnuplot, for which quick tutorials and easy-to-navigate collections of (sophisticated) recipes are available online. If you don't want to bother with gnuplot, the result files generated by the C codes are pure text files; they are therefore straightforward to open and read with your usual data analysis software. The data sets are in HDF5 format, and the C library, as well as the command line tools, developed by the HDF5 Group are going to be heavily used here.

A remark on the code presentation

As already mentioned, the literate programming approach is used here. This means that the code is broken into "manageable" pieces that are individually explained (when just reading the code is not enough); they are then pasted together to give the code that actually gets compiled. These manageable pieces are called blocks and each block gets a name like <<name-of-the-block>> upon definition. It is then referred to by this name when used in subsequent code. See Schulte, Davison, Dye and Dominik (2010) A Multi-Language Computing Environment for Literate Programming and Reproducible Research for further explanations. The code blocks also include documentation in Doxygen format, and we try to avoid writing the same thing twice, in the text and in the documentation. So if something is "missing" from the text description, please check the documentation within the block first to see if what you're looking for is there.


