The scripts for this dataset were written using Python 3.10.4. Using a virtual environment is recommended. To install the dependencies, run pip install -r requirements.txt in the Code directory. Modify config.json to point to where you extracted the data files and where cached experiment results, experiment figures, and saved models should be stored. Make sure to run the scripts from the Code directory; paths in config.json can be relative to the Code directory or absolute.

The data files come in two formats. The first is the raw data, stored in two files: one contains the IQ sample data and the other contains the ground truth (GT) data. The Experiment and Dataset class files in the Scripts directory automate the process of matching the file names (consisting of the date and time of the experiment) to the official experiment name through Conversions\TI.json, as well as the process of calculating the GT for a CTE packet, among many other necessities. The second format, which should be considered the default, is much more efficient. It consists of pre-extracted information that would otherwise need to be calculated from the first format, arranged so that the Experiment and Dataset classes can use memory mapping to greatly reduce the memory usage and time spent on any experiments performed.

If you wish to create your own TensorFlow Datasets, you can do so by modifying config.json.
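As an aside on why the second (pre-extracted) format is efficient: memory mapping reads slices of a file lazily from disk instead of loading the whole file into RAM. A minimal sketch using NumPy's memmap follows; the file name, dtype, and shape here are purely illustrative, not the dataset's actual on-disk layout (which is defined by the Experiment and Dataset classes).

```python
import numpy as np

# Create a tiny stand-in for a pre-extracted cache file (illustrative only).
data = np.arange(16, dtype=np.complex64)
data.tofile("iq_cache.dat")

# Map the file read-only; no sample data is loaded yet.
iq = np.memmap("iq_cache.dat", dtype=np.complex64, mode="r")

# Only this slice is actually pulled into memory when materialized.
window = np.asarray(iq[:4])
```

Because slices are fetched on demand, an experiment that touches only a subset of samples never pays the memory or I/O cost of the full file.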
The following options in the config affect what data is included in a dataset created by create_tf_dataset_*.py:

"include_continuous" (0/1) -> Whether to include the experiments where the tag constantly moves
"ds_include_zigzag" (0/1) -> Whether to include the experiments where the distance between the tag and anchor constantly varies (only included if "include_continuous" = 1)
"ds_include_obstacles" (0/1) -> Whether to include the experiments which have obstacles between the tag and the anchor
"phase_corr" (0/1) -> Whether to apply Carrier Frequency Offset (CFO) correction to the IQ samples
"phase_corr_experimental" (0/1) -> Deprecated
"standardize_inputs" (0/1) -> Whether to standardize the inputs to have a mean of 0 and a standard deviation of 1
"normalize_inputs" (0/1) -> Whether to scale the inputs to be between 0 and 1
"normalize_labels" (0/1) -> Whether to scale the output to be between 0 and 1
"keep_filtered_angles" (0/1) -> Whether to keep the samples which correspond to the times when the tag was in motion in the stopping experiments, referred to as the interstitial angles
"ds_range_50_50" (0/1) -> Whether to keep only samples where the GT angle falls in the range [-50, 50] degrees
"single_height" (0/1) -> Whether to use only samples where the tag is at a single elevation
"height" ("800"/"1100"/"1400") -> If "single_height" = 1, the elevation (in mm) to be used
"single_distance" (0/1) -> Whether to use only samples where the tag is at a single distance from the anchor
"distance" ("1500"/"2000"/"2500"/"3000") -> If "single_distance" = 1, the distance (in mm) to be used
"single_experiment" (0/1) -> Whether to use only a single combination of height and distance; depending on other settings this could actually be between 1 and 4 experiments
"ds_val_percent" ([0, 1]) -> The fraction of samples to be set aside for a validation set, expressed as a decimal between 0 and 1 inclusive
"ds_test_percent" ([0, 1]) -> The fraction of samples to be set aside for a testing set, expressed as a decimal between 0 and 1 inclusive
"ds_batch_size" -> The size of the individual batches that the completed TensorFlow dataset will be split into

Each of these settings also affects which experiments are used in continuous_movement_verification.py, which calls the angle of arrival (AoA) determination algorithm chosen in the "aoa_det_alg" setting of the config. "aoa_det_alg" must be one of the values provided in "poss_algs". Implementation of training models is left to the user, but models can be tested on unseen data with this code by setting "aoa_det_alg" to "ML" and choosing the trained model from the list.

"moving_avg" (0/1) -> Determines whether a moving average is applied to the results of continuous_movement_verification.py
"save_data" (0/1) -> Determines whether results will be cached for future runs
"load_data" (0/1) -> Determines whether results from previous runs will be loaded from disk

If both "save_data" and "load_data" are 1 at the time of calling helpers.load_config, then they will both be set to 0.
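To make the preprocessing and split options above concrete, here is a minimal sketch of the transformations they describe. The function names are illustrative, not the repository's actual implementation; only the formulas (zero-mean/unit-variance standardization, min-max scaling to [0, 1], and fractional validation/test splits) come from the option descriptions.

```python
import numpy as np

def standardize(x):
    # "standardize_inputs": shift and scale to mean 0, standard deviation 1.
    return (x - x.mean()) / x.std()

def normalize(x):
    # "normalize_inputs" / "normalize_labels": min-max scale into [0, 1].
    return (x - x.min()) / (x.max() - x.min())

def split_counts(n_samples, val_percent, test_percent):
    # "ds_val_percent" / "ds_test_percent": fractions of samples held out
    # for validation and testing; the remainder is used for training.
    n_val = int(n_samples * val_percent)
    n_test = int(n_samples * test_percent)
    return n_samples - n_val - n_test, n_val, n_test
```

For example, split_counts(1000, 0.2, 0.1) reserves 200 samples for validation and 100 for testing, leaving 700 for training.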
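The interaction between "save_data" and "load_data" can be sketched as a small guard; this is a hypothetical helper mirroring the documented behaviour of helpers.load_config (both flags set to 1 are reset to 0), not the actual source code.

```python
def sanitize_cache_flags(config):
    # Hypothetical guard matching the documented rule: results cannot be
    # both written to and read from the cache in the same run, so if
    # "save_data" and "load_data" are both 1, both are reset to 0.
    if config.get("save_data") == 1 and config.get("load_data") == 1:
        config["save_data"] = 0
        config["load_data"] = 0
    return config
```

Any other combination of the two flags is left untouched, so caching-only and loading-only runs behave as configured.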