taupy
Description
This repository contains taupy, version 0.2.0, together with notebooks and data for the paper “Arguments as drivers of issue polarisation in debates among artificial agents”, published in JASSS.
taupy serves as an implementation of the TDS model described in the paper. It requires Python ≥ 3.9 and pip (which ships with Python 3.9 by default). To install taupy from the source code provided here, please download and unpack taupy-v0.2.0.zip and run pip from the command line in the parent folder:
python -m pip install taupy/
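To check that the installation succeeded, you can ask pip for the package metadata and try importing the module:

python -m pip show taupy
python -c "import taupy"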
If you are on a Linux operating system, dd will automatically install the dd.cudd module, which contains a compiled version of CUDD. On Windows and macOS, dd will not automatically install CUDD, and taupy will rely on dd's pure Python implementation of binary decision diagrams instead. This has no implications for functionality, but it may affect computation time.
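If you would like to verify which backend dd ended up with on your system, a minimal check like the following can be run in a Python session (dd.cudd and dd.autoref are modules of the dd package, not of taupy):

# Report whether dd's compiled CUDD backend is available;
# otherwise dd falls back to its pure Python BDD implementation (dd.autoref).
try:
    from dd import cudd
    print("dd.cudd is available: BDDs will use the compiled CUDD library.")
except ImportError:
    from dd import autoref
    print("dd.cudd is not available: using the pure Python backend dd.autoref.")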
Update (November 2022): In the original publication, the values for dispersion were not correctly normalised to the [0, 1] domain. Please find instructions below on how to correct this in the notebooks.
The notebooks are in the notebooks folder and contain the information one needs to pass to taupy in order to obtain the results presented in the paper, together with additional documentation of this code. Except for experiments.ipynb, the notebooks do not require taupy to run, but they do require ipykernel and some common scientific Python packages, such as numpy. The requirements of the individual notebooks are listed in the first cell of each notebook.
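A typical environment for the analysis notebooks can be set up with pip. The exact package list below is my assumption (ipykernel and numpy are mentioned above, pandas is used in the corrected code further down, and matplotlib is a guess for the figure-producing notebooks), so please treat the first cell of each notebook as authoritative:

python -m pip install ipykernel numpy pandas matplotlib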
The workflow to create the simulation results from scratch is as follows:
- Begin by running the experiments (notebooks/experiments.ipynb), or select the pre-compiled simulation runs (notebooks/data/*pkl) or raw polarisation values (notebooks/data/*zip). Please note that running the experiments with the original settings will not be possible without access to an HPC, and generating the raw polarisation values from the pickled simulations (*pkl) is resource-intensive as well, particularly in terms of RAM. However, small toy experiments can be run on any personal computer. It is therefore recommended to work with the pre-compiled results (the notebooks/data/*zip files); a minimal loading sketch follows this list.
- There are a number of different notebooks, depending on which data you'd like to analyse and which output you'd like to achieve:
  - Use notebooks/ari-analysis.ipynb to visualise the results of the ARI analysis (Figure 10 in Appendix B).
  - Use notebooks/distributions.ipynb to compile Figure 6 and Table 2.
  - notebooks/lineplots.ipynb compiles Figures 3, 5, and 7.
  - notebooks/subset_clust-initially-polarised.ipynb has the data for Figures 8 and 9 (robustness analysis).
  - notebooks/auxiliary-analysis.ipynb compiles Figures 11 and 12 in Appendix B.
  - notebooks/heatmaps.ipynb compiles the additional data figures from Appendix C.
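If you work with the pre-compiled results, a minimal sketch for loading one of the zipped pickles into a pandas DataFrame looks as follows. It assumes you run it from the notebooks folder (as the notebooks themselves do) and uses the 3-50-20convert_var.zip file that also appears in the corrected heatmaps code below; remember to apply the dispersion correction described in the next section:

import pandas as pd

# Load the pre-compiled polarisation measures for the "convert" strategy.
# pandas infers the zip compression from the file extension.
convert_var = pd.read_pickle("data/3-50-20convert_var.zip")

# Correct normalisation of the dispersion measure to the [0, 1] domain.
convert_var["dispersion"] = 2 * convert_var["dispersion"]

print(convert_var.head())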
Update (November 2022): For the correct normalisation of dispersion values, the following transformations need to be applied in the notebooks.
In lineplots.ipynb, please add the following code below the code block “Let's begin by loading all the data”:
convert_var["dispersion"] = 2 * convert_var["dispersion"] undercut_var["dispersion"] = 2 * undercut_var["dispersion"] attack_var["dispersion"] = 2 * attack_var["dispersion"] fortify_var["dispersion"] = 2 * fortify_var["dispersion"] any_var["dispersion"] = 2 * any_var["dispersion"]
In heatmaps.ipynb, the corrected code block just below the heading “Heatmaps for dispersion (Figure 13)” is the following:
convert_var = pd.read_pickle("data/3-50-20convert_var.zip")
convert_var["dispersion"] = 2 * convert_var["dispersion"]
convert_dispersion = convert_var[convert_var["dispersion"] > -1].copy()
convert_dispersion["strategy"] = "convert"

undercut_var = pd.read_pickle("data/3-50-20undercut_var.zip")
undercut_var["dispersion"] = 2 * undercut_var["dispersion"]
undercut_dispersion = undercut_var[undercut_var["dispersion"] > -1].copy()
undercut_dispersion["strategy"] = "undercut"

fortify_var = pd.read_pickle("data/3-50-20fortify_var.zip")
fortify_var["dispersion"] = 2 * fortify_var["dispersion"]
fortify_dispersion = fortify_var[fortify_var["dispersion"] > -1].copy()
fortify_dispersion["strategy"] = "fortify"

attack_var = pd.read_pickle("data/3-50-20attack_var.zip")
attack_var["dispersion"] = 2 * attack_var["dispersion"]
attack_dispersion = attack_var[attack_var["dispersion"] > -1].copy()
attack_dispersion["strategy"] = "attack"

any_var = pd.read_pickle("data/3-50-20any_var.zip")
any_var["dispersion"] = 2 * any_var["dispersion"]
any_dispersion = any_var[any_var["dispersion"] > -1].copy()
any_dispersion["strategy"] = "any"
I'd like to apologise for the inconvenience!
Files

Name | Size | MD5
---|---|---
notebooks.zip | 376.8 MB | md5:90d9dc7f2a1fa2fcb39b19d4d40552b4
taupy-v0.2.0.zip | 27.7 kB | md5:24100f575e25178f7bad4ff88b27f9d5