Dataset Version 0.1 - 15.06.2014
Admin: Luca Fiaschi - luca.fiaschi@gmail.com

References:

[1] @ARTICLE{ fiaschi_13_keeping,
      year      = { 2013 },
      pages     = { 656-659 },
      journal   = { ISBI 2013, Proceedings },
      author    = { L. Fiaschi and K. Gregor and B. Afonso and M. Zlatic and F. A. Hamprecht },
      title     = { {Keeping Count: Leveraging Temporal Context to Count Heavily Overlapping Objects} },
      timestamp = { 2013.01.23 },
      doi       = { 10.1109/ISBI.2013.6556560 },
      cite      = { fiaschi_13_keeping }
    }

[2] @INPROCEEDINGS{ fiaschi_14_tracking,
      year      = { 2014 },
      author    = { L. Fiaschi and F. Diego and K. Gregor and M. Schiegg and U. K{\"o}the and M. Zlatic and F. A. Hamprecht },
      title     = { {Tracking indistinguishable translucent objects over time using weakly supervised structured learning} },
      booktitle = { CVPR, Proceedings, in press },
      timestamp = { 2014.03.20 },
      cite      = { fiaschi_14_tracking }
    }

If you use this dataset, please cite reference [2].

We provide the original data used in the experiments presented in [2].

Dependencies: the data is stored as HDF5 files; please refer to http://www.h5py.org/ for specifications.

Various populations of 72-hour-old Drosophila larvae were filmed for 5 minutes at a temporal resolution of 3.3 frames per second, 1000 frames in total. The images have a spatial resolution of 135.3 μm/pixel and a size of 1400 × 1400 pixels.

For detection and segmentation of the foreground regions we use the open-source software ILASTIK (www.ilastik.org). After elimination of tiny isolated objects (τ < 15 pixels), we compute the connected components of each thresholded foreground probability image of the series. The graph G is created by linking foreground regions from neighboring timesteps that overlap spatially by more than 10 pixels. All foreground regions that are not fully inside a margin of 100 pixels from the image borders are excluded, to avoid dealing with truncated larvae (clusters).

This folder contains:

1) data: contains all the movies in the dataset [2], e.g.
   name_movie = t8_2012_10_18_1543_data4

1.1) data/name_movie: contains the raw data and the intermediate results of algorithm [1].

1.1.1) data/name_movie/name_movie.h5: HDF5 file containing the data.
   * The HDF5 volume format is t, z, x, y, c.
   * The volume "volume/data" contains the 8-bit video volume.
   * The volume "volume/pmap" contains the probability map for the background/foreground segmentation.
   * The volume "volume/segmentation" contains the foreground/background segmentation as detected connected components of the foreground: each connected component is assigned a unique integer label, and 0 indicates background.

1.1.2) data/name_movie/name_movie.graph_fast: contains the graph information from algorithm [1].
   * The array "graph_fast/entire/graph_fast" is a compressed version of the graph edges, e.g. [ 1, 13, -1, 2, 14, -1, 3, 15, -1, 4 ... ] indicates edges between the connected components labeled 1-13, 2-14, 3-15, ..., where -1 is a stopper value.
   * The N × 3 array "graph_fast/entire/positions" contains the pixel positions of the connected components, e.g. [ [1 1 1], [ 0 686 529], [ 0 686 530], [ 0 686 531], [ 0 686 532], [ 0 686 533], [ 0 689 549], [ 0 689 550], [-1 -1 -1], [2 2 2] ]. The first row is the connected component label (repeated three times), while [-1 -1 -1] is a stopper value. The rows between the label row and the stopper are pixel positions as t, x, y.
   * The array "graph_fast/entire/sizes" indicates the size (in pixels) of each connected component, e.g. array[1] is the size in pixels of the connected component labeled 1.
   * The array "graph_fast/entire/times" indicates the timestep of each connected component, e.g. array[1] is the time at which connected component 1 is present.

1.1.3) data/name_movie/name_movie.res: contains the inferred number of objects per connected component from algorithm [1].
   * The array "graph_fast/entire/true_sizes" contains the number of objects per connected component, e.g.
     array[1] is the number of objects in connected component 1.

1.1.4) data/name_movie/movie_sizes: contains a visualization of the results of algorithm [1]. Please refer to the original paper.

1.1.5) data/name_movie/movie_gt_tracks: contains a visualization of the GT tracks. These values were not used in the evaluation of [2], which is based only on the overlapping spatiotemporal regions. Tracks shorter than 20 frames should be filtered out, as they may be due to spurious errors in the manual tracking.

1.1.6) data/name_movie/name_movie.txt: contains the GT tracks for the movie.

2) cvpr_all_enconters: the overlapping spatiotemporal regions, obtained by parsing the graph of algorithm [1] and used in the evaluation of algorithm [2]. These are divided per encounter type.

Encounter file format specifications:
   * The HDF5 volume format is t, z, x, y, c.
   * The volume subgroup contains the cropped information for the overlapping spatiotemporal region.
   * The volume "volume/userbrush" is the ground-truth labelling from the user, who marks each connected component with a unique brush stroke before it enters the overlapping region and after it exits it. Integer values.
   * The array "meta/info" indicates the location of the spatiotemporal region in the original movie volume: info[0]:info[1]+1 indicates the region in t, info[2]:info[3]+1 indicates the region in x, and info[4]:info[5]+1 indicates the region in y.
   * The array "meta/inlarvae" contains the ids of the connected components of isolated larvae entering the region.
   * The array "meta/outlarvae" contains the ids of the connected components of isolated larvae leaving the region.
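The compressed edge array described in 1.1.2 can be decoded by splitting on the -1 stopper values. A minimal Python sketch, assuming (per the example above) that each chunk between stoppers is one edge, i.e. a pair of connected-component labels from neighboring timesteps; decode_edges is a hypothetical helper name, not part of the dataset:

```python
def decode_edges(flat):
    """Split a flat "graph_fast/entire/graph_fast" array on the -1
    stopper values; each resulting chunk is one edge of the graph G
    (a pair of connected-component labels)."""
    edges, chunk = [], []
    for v in flat:
        if v == -1:              # stopper: close the current chunk
            if chunk:
                edges.append(tuple(chunk))
            chunk = []
        else:
            chunk.append(int(v))
    if chunk:                    # flush a trailing chunk without stopper
        edges.append(tuple(chunk))
    return edges

# The example from the README:
print(decode_edges([1, 13, -1, 2, 14, -1, 3, 15]))  # → [(1, 13), (2, 14), (3, 15)]
```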
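Likewise, the "graph_fast/entire/positions" array can be unpacked into one pixel list per connected component by scanning for the [l, l, l] header rows and [-1, -1, -1] stoppers. A sketch under the layout stated in 1.1.2 (decode_positions is a hypothetical helper name):

```python
def decode_positions(rows):
    """Parse a "graph_fast/entire/positions" array: each block starts
    with a header row [l, l, l] giving the component label, is followed
    by (t, x, y) pixel rows, and is closed by the stopper [-1, -1, -1].
    Returns a dict mapping label -> list of (t, x, y) tuples."""
    comps, label, pixels = {}, None, []
    for row in rows:
        r = tuple(int(v) for v in row)
        if r == (-1, -1, -1):        # stopper: flush the current block
            if label is not None:
                comps[label] = pixels
            label, pixels = None, []
        elif label is None:          # header row [l, l, l]
            label = r[0]
        else:                        # a pixel position (t, x, y)
            pixels.append(r)
    if label is not None:            # trailing block without stopper
        comps[label] = pixels
    return comps
```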
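The "meta/info" bounds in the encounter files are inclusive, so cropping the region out of the full movie volume is three +1 slices. A sketch on a plain t, x, y array (z and c axes dropped for brevity; crop_region is a hypothetical helper name):

```python
import numpy as np

def crop_region(volume, info):
    """Crop the spatiotemporal region described by a "meta/info" array
    out of a t, x, y volume; the stored bounds are inclusive, hence +1."""
    t0, t1, x0, x1, y0, y1 = (int(v) for v in info[:6])
    return volume[t0:t1 + 1, x0:x1 + 1, y0:y1 + 1]

# Toy volume standing in for "volume/data" of a movie:
vol = np.arange(4 * 10 * 10).reshape(4, 10, 10)
region = crop_region(vol, [1, 2, 3, 6, 0, 4])
print(region.shape)  # → (2, 4, 5)
```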
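Section 1.1.5 advises discarding GT tracks shorter than 20 frames. The format of name_movie.txt is not specified here, so the sketch below assumes the tracks have already been parsed into a dict mapping track id to a list of per-frame points; filter_short_tracks is a hypothetical helper name:

```python
def filter_short_tracks(tracks, min_length=20):
    """Drop GT tracks shorter than min_length frames, which may be due
    to spurious errors in the manual tracking (see 1.1.5)."""
    return {tid: pts for tid, pts in tracks.items() if len(pts) >= min_length}
```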