Published October 24, 2024 | Version v2
Computational notebook · Open access

Unraveling the complexity of rat object vision requires a full convolutional network - and beyond

  • 1. Scuola Internazionale Superiore di Studi Avanzati
  • 2. University of California, Davis

Description


This repository contains the source code and data for the paper Unraveling the complexity of rat object vision requires a full convolutional network - and beyond (Muratore et al., 2024).

 

Reproducing Paper Results

 

To ease the process of reproducing the paper results, we provide five Jupyter Notebooks. Each notebook analyzes data from one of the three datasets considered (Zoccolan et al. (2009), Alemi et al. (2013), Djurdjevic et al. (2018)). Each notebook is structured so that the initial cells perform the computations for the different measurements explored (and store the results), while the second part of each notebook loads the stored results and assembles the visualizations presented in the paper. In particular:
 
  •  📝 1 - CNN on Zoccolan et al. (2009): Computes all the results related to the dataset introduced in Zoccolan et al. (2009), and produces the results of Figure 3.

 

  • 📝 2A - CNN on Alemi et al. (2013): Computes the accuracy analysis performed on the dataset introduced in Alemi et al. (2013), and produces the results of Figure 4.

 

  • 📝 2B - Saliency & Overlap on Alemi et al. (2013): Computes the visual strategy analysis performed on the dataset introduced in Alemi et al. (2013), and produces the results of Figure 5 and Figure 6.

 

  • 📝 3 - CNN on Djurdjevic et al. (2018): Computes all the results related to the dataset introduced in Djurdjevic et al. (2018), and produces the results of Figure 7.

 

  • 📝 4 - Control Analyses: Computes all the additional control analyses, in particular the null controls for a random VGG-16 and a parameter- and depth-matched Multi-Layer Perceptron (MLP), plus the control for a network trained with heavy image blur from Jang & Tong, Nat. Comm. (2024). This notebook produces the panels for Figures 8, 9, and 10.

 

Together with the original datasets, we provide intermediate results (e.g., pre-computed saliency maps for `vgg16`) that can be used to exactly reproduce all the figures of the paper. These files are collected in the 📂 data folder (see the Repository Structure section for a more in-depth description).
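The compute-then-visualize layout of the notebooks amounts to a simple on-disk caching pattern: an expensive result is computed once and stored, and later cells only load it. A minimal sketch of that pattern (the file path, value, and `cached` helper here are illustrative, not the repository's actual API):

```python
import pickle
import tempfile
from pathlib import Path

def cached(path, compute):
    """Load a previously stored result from disk, or compute and store it."""
    path = Path(path)
    if path.exists():
        with path.open("rb") as f:
            return pickle.load(f)
    result = compute()
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("wb") as f:
        pickle.dump(result, f)
    return result

# First call computes and stores; subsequent calls just load the file.
# (Illustrative path and value; not the repository's actual layout.)
result_file = Path(tempfile.mkdtemp()) / "accuracy.pkl"
acc = cached(result_file, lambda: {"vgg16": 0.87})
```

With this pattern, re-running only the visualization cells never re-triggers the heavy computation, which is why the pre-computed files shipped in 📂 data are enough to rebuild the figures.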

 

Repository Structure

 

The repository is structured as follows. The core Python package lives in the 📂 src folder, which is organized into several sub-modules. The most relevant are:

 

- 📂 src.scaling: contains the routines (shared between the presented experiments) that analyze CNN classification accuracy as a function of increasing feature size (recorded units).

 

- 📂 src.saliency: contains the routines to compute the saliency maps (visual strategies) based on the stimuli and the classifier outputs.

 

- 📂 src.dataset: contains the `torch.ImageDataset` classes that handle the loading of the three different datasets used in the study.
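Loaders like those in 📂 src.dataset typically follow torch's map-style dataset protocol, where a class only needs `__len__` and `__getitem__` to be consumed by a `DataLoader`. A dependency-free sketch of that protocol (the `StimuliDataset` class and its behavior are hypothetical, not the repository's actual loader):

```python
from pathlib import Path

class StimuliDataset:
    """Map-style dataset sketch: a DataLoader only needs __len__ and __getitem__."""

    def __init__(self, root, transform=None):
        # In a real loader these would be actual stimulus images;
        # here we simply index whatever PNG files live under `root`.
        self.paths = sorted(Path(root).glob("*.png"))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        item = self.paths[idx]  # in practice: PIL.Image.open(self.paths[idx])
        if self.transform is not None:
            item = self.transform(item)
        return item
```

Because each of the three datasets is exposed through the same interface, the downstream accuracy and saliency routines can stay dataset-agnostic.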

 

The data needed to reproduce the experiments are collected within the 📂 data folder. This folder is organized into dataset-specific subfolders:
 
📂 data
├── 📂 alemi (Alemi et al. 2013)
├── 📂 pnas (Zoccolan et al. 2009)
└── 📂 vlad (Djurdjevic et al. 2018)
 

 

Each dataset sub-folder is further subdivided into the different scales (📂 1K, 📂 5K, ...), the subjects (📂 rat, 📂 vgg16, ...), the 📂 stimuli, the controls 📂 random and 📂 blur-training, plus optional dataset-specific folders (e.g., 📂 alemi/other). Each scale subfolder contains the results of the accuracy/correlation analysis, while the results for the visual strategy (saliency maps & overlaps) are subject-specific and contained within the subject subfolder (e.g., 📂 alemi/vgg16/saliency).
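A common way to quantify the overlap between two subjects' saliency maps is to binarize each map at a high quantile and compare the resulting masks. The sketch below uses the Jaccard index (intersection over union) on the top ~10% most-salient pixels; this is a generic illustration of the idea, not necessarily the overlap measure used in the paper:

```python
import numpy as np

def saliency_overlap(map_a, map_b, q=0.9):
    """Jaccard overlap of the most-salient pixels of two saliency maps.

    Each map is binarized at its own q-th quantile (for q=0.9, roughly the
    top 10% of pixels survive); the overlap is the intersection over union
    of the two binary masks. Generic sketch, not the paper's exact measure.
    """
    mask_a = map_a >= np.quantile(map_a, q)
    mask_b = map_b >= np.quantile(map_b, q)
    union = np.logical_or(mask_a, mask_b).sum()
    if union == 0:
        return 0.0
    return np.logical_and(mask_a, mask_b).sum() / union

rng = np.random.default_rng(0)
map_a = rng.random((64, 64))
map_b = rng.random((64, 64))
full = saliency_overlap(map_a, map_a)  # identical maps -> 1.0
```

The measure is bounded in [0, 1], with 1 meaning the two subjects attend to exactly the same image regions at the chosen threshold.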

Files

paper_source_v2.zip (1.9 GB)
md5:10c56f60254d0e558e84dd52f8cac4cd

Additional details

Related works

Cites
Publication: 10.1073/pnas.0811583106 (DOI)
Publication: 10.1016/j.cub.2018.02.037 (DOI)
Has version
Publication: 10.1523/JNEUROSCI.3629-12.2013 (DOI)

Dates

Submitted
2024-05-25

Software

Programming language
Python