0.0.15
    o fix report: include model-averaged table for covariate and litter effect
0.0.14 
	o Changed layout of model weights table in Advanced Options
	o Implemented error catching when fitting the models
	o Disabled option to choose sampling method for anydoseresponse-functions for quantal data (both clustered and regular). Option 
	remains only for clustered continuous individual data. 
	o Updated test file for clustered quantal data
	o Updated default values for priors depending on whether or not extended input has been checked
	o Catch errors in model fitting functions
	o BMABMDR version 9052
	o Fixed error in anydoseresponse when an analysis is run with litter effect and afterwards a continuous summary analysis is run
	o Fixed tables for clustered analyses
	
0.0.13
	o Added warning when posterior distribution has been truncated (not for covariate)
	o Added spinner around analysis output
	o Set seed when fit Models button is pressed
	o Allowed running analysis when there is no dose response effect
	o Updated sensitivity analysis

0.0.12
	o Implemented the deletion of the defaultAdaptedWeights-scenario when either one of the Distribution types is unchecked
	o Created an extra panel for the anydoseresponse-function output
	
0.0.11
	o Implemented extended-input in Advanced Settings
	o Disable certain Advanced Settings when running an analysis with a covariate
	o When performing sensitivity analyses, if one of the distribution types is unchecked (either "Normal" or "Lognormal"):
		- For continuous summary data: the defaultAdaptedWeights-scenario is removed when the Bartlett test for the
		remaining Distribution type is rejected 
		- For continuous individual data: the defaultAdaptedWeights-scenario is removed when the Levene's test for the
		remaining Distribution type is rejected
	o For analysis with a covariate:
		- When one of the two Distribution types is unchecked, the corresponding models and plots are filtered out in the output
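	The removal rule above can be sketched as a small predicate. This is a Python sketch for illustration only (the app itself is written in R); the function name and arguments are invented:

```python
def keep_default_adapted_weights(variance_test_pvalue, alpha=0.05):
    """Decide whether the defaultAdaptedWeights scenario is kept.

    variance_test_pvalue: p-value of the equal-variance test for the
    distribution type that remains checked -- the Bartlett test for
    continuous summary data, Levene's test for continuous individual data.
    """
    # The scenario is removed when the test rejects equal variances
    # for the remaining distribution type.
    return variance_test_pvalue >= alpha

# Bartlett rejected for the remaining type -> scenario removed
assert keep_default_adapted_weights(0.01) is False
# Test not rejected -> scenario kept
assert keep_default_adapted_weights(0.40) is True
```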

0.0.10
	o Activate option to send the results via mail
	o Package gamlss is a dependency of BMABMDR that isn't added to their package yet
	o Analysis report:
	    - Only show results from the default analysis page
		- Justification: provide the option to justify when a user has run the analysis with values 
		different from the default ones. In practice: make a list of all default values, loop over 
		them with the values that were actually used, and flag which ones differ. 
	
	o Package/project report:
		- Best to do this on a Windows machine that has Word installed to avoid layout problems in LibreOffice
		- Include as reference: https://zenodo.org/record/7118583#.YzrWyNJBxkh
		- Yellow marked text will be adapted by the EFSA user
		- Specify the framework contract number (8 for this project) 
		- Copy text from project description/function descriptions
		- See template EFSA reports and the BMD report from Machteld
		
	o Tests written by Machteld for BMABMDR package have been added to test files of bmd
	o Removed binary from response type options
	o Fixed bug in selecting Distribution type in Fit models tab
		- The way the prior weights are set internally in the app is as follows:
		1. When distribution types are not changed, weights for both types of models are set to 1
		2. When N or LN is unchecked, weights for that type are set to 0
		3. When model weights are selected/shown via the Advanced Settings, these are the 
		weights that will be used. 
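	The three rules above can be sketched as follows. Python for illustration only (the app is R/Shiny); the function name and the assumption of 8 models per distribution type are ours:

```python
def prior_weights(normal_checked, lognormal_checked, advanced_weights=None,
                  n_per_type=8):
    """Sketch of how the app derives prior model weights (rules 1-3 above)."""
    # Rule 3: weights set via Advanced Settings take precedence.
    if advanced_weights is not None:
        return list(advanced_weights)
    # Rules 1-2: checked distribution types get weight 1, unchecked get 0.
    return ([1 if normal_checked else 0] * n_per_type
            + [1 if lognormal_checked else 0] * n_per_type)

assert prior_weights(True, True) == [1] * 16             # rule 1
assert prior_weights(True, False) == [1] * 8 + [0] * 8   # rule 2: LN unchecked
assert prior_weights(True, False, advanced_weights=[0.5] * 16) == [0.5] * 16
```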
			
	o Placed the current scenario number over the total number of scenarios to be fitted in the progress bar of the output. Can also put the scenario name if wanted.
	If they want the total progress in 1 progress bar, the code inside the fit models functions of BMABMDR needs to be adapted and an argument must be
	foreseen so we can indicate how many scenarios are to be fitted. 
	
	o TODO: 
		- Add part in the project report on how the prior weights are set backend
		- Add part in the project report explaining where the values for the Prior parameters come from 
		- Add part in project report explaining where section Dose response effect comes from 
		when selecting quantal clustered data
		- Add part in project report explaining selection of covariates + that no prior can be selected if a covariate is 
		selected
	
0.0.9
	o Showed Levene's test in output for individual data 
	o Want an extra warning if the assumption of constant variance is rejected for both distributions, or 
	is the text in red sufficient? 
	o Additional analyses for individual data based on the outcome of Levene's test, making the Bartlett test basically redundant,
	but its output is still shown. 
	o Implemented Shapiro-Wilk test for individual data:
		- When null hypothesis is rejected for either normal or lognormal distribution, the corresponding prior weights are set 
		to 0. After that, Levene's test determines whether the prior weights are changed again depending on whether or not 
		the equal variance assumption is rejected for either the normal or lognormal distribution. 
		- When both distributions fail the Shapiro-Wilk test, prior weights are set to 1 but an additional WARNING is given in the app
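	The order of checks described above, sketched in Python (illustrative only; the real app works with the R test functions and its own weight vectors, and the function name is ours):

```python
def weights_after_shapiro(p_normal, p_lognormal, alpha=0.05):
    """Apply the Shapiro-Wilk step described above.

    Returns (normal_weight, lognormal_weight, warn). Levene's test may
    adjust the weights again afterwards; that step is omitted here.
    """
    w_n = 0 if p_normal < alpha else 1
    w_ln = 0 if p_lognormal < alpha else 1
    if w_n == 0 and w_ln == 0:
        # Both distributions fail Shapiro-Wilk: keep weights at 1,
        # but flag an additional WARNING in the app.
        return 1, 1, True
    return w_n, w_ln, False

assert weights_after_shapiro(0.01, 0.30) == (0, 1, False)
assert weights_after_shapiro(0.01, 0.01) == (1, 1, True)
```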
	o Added note for summary data that not all checks might have been done to assure correct results
	
	o For clustered data:
		- Asked for summary function for continuous clustered data
		- How are we going to use the summary data with the adapted variances based on the outcome of the Levene's test 
		as an input for the PREP_DATA_N_C functions, which only accept individual data? (in the documentation of the function
		it states that there is an argument to indicate whether it's summary or individual data, but this is not
		used in the function) 
		- Quantal data: problem with anydoseresponseQ-function, which keeps on running when argument cluster = TRUE (also in 
		example script from Cecile)
		- Quantal data: is the litter column actually used in the PREP_DATA_QA-function? (I don't think so)
		- Can quantal data also be individual? Would say so since it has the sumstats-argument, but so far we have always 
		assumed it to be summary data
		

0.0.8
	o Moved BMD feasibility analysis to Fit Models tab
	o Changed parameter options of prior D 
	o Fixed issues with prior parameters
	o Implemented Shape parameters for prior parameters
		- Unticked checkbox: value = 0.0001, ticked: value = 4
	o For continuous summary data:
		- We can now have 4 outputs depending on the outcome of the bartlett-function (where the assumption is rejected for lognormal distribution)
			1. Using the original data and original weights
			2. Using the original data and weights set to 0 for lognormal models (depending on which distribution rejects the assumption)
			3. Using minimum variation data and original weights (i.e. rep(1,16))
			4. Using maximum variation data and original weights
		- How to assign the specified weights from the Advanced Settings? 
			- As currently implemented, the prior.weights are set as discussed above, unless 
			the conditionalPanel of the model weights in the Advanced Settings is opened
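	The four outputs listed above can be enumerated as scenario configurations. A hypothetical Python sketch (labels and structure are ours; 16 is the total number of models):

```python
def summary_sensitivity_scenarios():
    """The 4 outputs for continuous summary data when the Bartlett test
    rejects the equal-variance assumption for one distribution type."""
    original = [1] * 16  # original weights, i.e. rep(1, 16)
    zeroed = "0 for models of the rejecting distribution"
    return [
        ("original data", original),
        ("original data", zeroed),
        ("minimum variation data", original),
        ("maximum variation data", original),
    ]

assert len(summary_sensitivity_scenarios()) == 4
```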
	o Different data types have different assumptions/checks: for continuous summary we have the Bartlett test. 
	For continuous individual we need to use the Shapiro-Wilk test, but we also have the output of the Bartlett test. Do we only
	look at the outcome of the Shapiro-Wilk test?
		- Is the NLN_test-function in the BMABMDR-package?
		- Which threshold should be used (5 % or 10 %)?
	Is there also a check for quantal data? 
	o Also different sensitivity analyses for continuous individual data than for continuous summary data when the check fails?
	o When there are samples containing NA (for individual data), the sample is currently being removed to perform
	complete case analysis
		- Warning message is supplied in UI
	o Use anydoseResponseC-function for individual data? What about the 3rd column in the data-argument?  
	o Output of continuous individual data is fairly strange, but might be related to bypassing the Shapiro-Wilk 
	test (currently the output of the test is not taken into account)
	o We need to provide a warning when the Bayes Factor of every model is > 10
		- Where can we read out this value? 
		- Where does the warning need to be in the UI?
	o Did some testing:
		- Using Bridge Sampling seems to take a very long time
	o Added subtitles per response and scenario for Advanced Plots
	o Implemented checks in output
		- Might be a good idea to add which data is used for which test to avoid 
		confusion.
	
0.0.7
	o Implemented the new version of the BMABMDR package
	o In the prior parameters:
		- Added prior parameter D 
		- Default upper bound for BMD prior has been set to maxDose^2, which in our case results in 10000
	o Implemented the different scenarios for homoscedasticity according to the example script of Cecile
		- Updated the UI to deal with the different scenarios
			- Dropdown menus per response to select the intended scenario
			- Comments on the naming of the scenarios? (add N or LN to the name?)
		- Problem with defining the prior weights
			- Using other weights than rep(1,16) results in an error from plot.BMADR when passing the output from modelFit;
			the error appears to be related to line 622: 
			"Warning in max(Dose * mod.obj$max.dose) : no non-missing arguments to max; returning -Inf" (will be fixed
			on their side)		
	o There are some additional bugs that appear in the console which have been reported to Cecile and Daniel:
		- Error from function full.laplace_MA:
			"Error in chol.default(-H) : the leading minor of order 3 is not positive definite" (will be fixed
			on their side)
		- Litter effect: 
			- PREP_DATA-functions for continuous data are not able to handle this yet (don't have a 
			'cluster' argument)
			- anydoseresponseQ-function is not able to handle the argument 'cluster = TRUE' (Daniel says
			it is a problem with quantal data but the function seems to run on quantal data with 'cluster =
			FALSE')
			- Cannot implement this if this is not fixed 
		- Problem with using Bridge sampling: function keeps on running (see warning messages in console 
		when running) (not yet reported)

	o Implemented choice between responses/scenario's in advanced plots and adapted UI to allow comparing 
	between different responses/scenarios
		- Add option to remove UI objects? 
			
	o BMD feasibility check: do you want this for continuous individual data? Because then we need specified columns for Dose, SD,
	Response and Size: in the continuous individual data 
	that we use, this is not specified, nor is it in the same order. Do we change the UI so you choose the response (and SE) in the data tab
	instead of in the Fit Models tab? (for the moment, BMD feasibility is only done for continuous summary data)
	
	o Individual data:
		- No need to summarize ourselves: this is incorporated in the PREP_DATA functions through the sumstats-argument
		- sans and nans inputs are set manually when dataType == "continuous individual"
		- error in PREP_DATA-function:
			function is not able to handle data with missing values

	o TODO:
		- Report
			- Make tables in same format as template?
			- Which plots to include? What when there are multiple scenarios/multiple responses?
			- Include advanced plots? All of them (for all scenarios)?
			- Provide the template text for sections Data description, Justification and Conclusion? 
		- Reactivity:
			Switching between single response continuous summary and single response quantal is OK
			Switching between multiple response continuous summary and single response continuous summary is OK
			Switching between quantal and multiple response continuous summary is also OK
		- The plotServer/selectPlot module is called multiple times 
		when switching between tabs
		- When setting parameter D to EPA, this causes the Bartlett outcome to accept the H0 but still there are
		multiple responses being created. 

0.0.6 
	o Two more Warnings in R CMD check:
		- problem with loading package AICcmodavg
		- package dependencies of BMABMDR give problems: packages should be loaded via 
		'Imports' in the package DESCRIPTION rather than loading via globalFunction (see link globalFunction)
		but this gives errors (problem with gamlss package in PREP_DATA_LN)
	o Added icons to BMD feasibility message
	o Adjusted BMD weights table output based on sampling method:
		- Renamed "Laplace sampling" to "Full Laplace"
		- Labelled "LP_Weights" as "Model weights"
		- In case of Bridge sampling, labelled "BS_Weights" as "Model Weights" + dropped "LP_Weights" column
	o Fixed informative priors
		- Rounding rules:
			- we round to 2 decimals unless: 
				- if rounding to 2 decimals leads to == 0, we keep the original value
				- if rounding leads to == 0 but the number of decimals is > 5, we still round 
				(to 0, internally transformed to 2.23e-306)
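	A sketch of these rounding rules (Python for illustration; the decimal count is passed in explicitly to sidestep floating-point representation issues, and the function name is ours):

```python
def round_prior_value(x, n_decimals_in_input):
    """Apply the rounding rules listed above to a user-entered value."""
    r = round(x, 2)
    if r != 0:
        return r                 # normal case: round to 2 decimals
    if n_decimals_in_input > 5:
        return 2.23e-306         # still round to 0, stored internally as this
    return x                     # rounding would give 0: keep original value

assert round_prior_value(1.2345, 4) == 1.23
assert round_prior_value(0.004, 3) == 0.004
assert round_prior_value(0.0000004, 7) == 2.23e-306
```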
	o TODO: warning if resulting BMDU value is equal to the input/default maximum for prior BMD (see previous meeting)		
	o TODO: Bug in informative priors: missing values are set to 0 instead of NA?
	o Implemented check for homoscedasticity (only for continuous data)
		- QUESTION: should we run the analysis again when either of the checks (normal & lognormal) fail or when both fail? 
	o TODO: reactivity issues again due to implementation of homoscedasticity check
		- Bug in output when new data is uploaded: previous output is shown before new output is 
		calculated	
	o QUESTION: How to deal with multiple responses in Advanced Plots tab?
		- Add a selectInput in the wellPanel/add another wellPanel?
		- add min/max responses?
	o QUESTION: can we get data that passes Bartlett test for testing? 
	o  Add checkbox of litter effect to a conditionalPanel

0.0.5
	o Added extra check for model parameter values:
		- Changed the internal value for 0's to .Machine$double.xmin * maxDose
		- if the amount of decimals is > 5, round to 0
		- otherwise, if the input value has < 5 decimals 
		and the rounded value is == 0, keep the value as is
		- TODO: change reading of mode from preData() 
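	The zero replacement above, sketched in Python (sys.float_info.min is the Python counterpart of R's .Machine$double.xmin; the function name is ours):

```python
import sys

def internal_zero(max_dose):
    """Value used internally in place of a user-entered 0: the smallest
    positive normalized double, scaled by the maximum dose."""
    return sys.float_info.min * max_dose

assert internal_zero(100) > 0
assert internal_zero(100) == sys.float_info.min * 100
```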
	o Rounded table values to 3 decimals
	o Data tab > filter module:
		- Kept Data Overview and Plots
		- Added feasibility of BMD as extra text in UI:
			- QUESTION: Correct that DF is empty when BMD is not feasible? (would expect "No"
			as output)
		- Allowed feasibility check only for continuous data
	o Fixed reactivity
	o Reworked Advanced Plots tab 	
	
0.0.4
	o Updated UI of data tab:
		- Added select input for response in Data tab 
		- Data overview reacts to subsetting
	o Code sent by Jose:
		- Quantal data has no SD nor SE, so we don't want the BMD filter for those data? 
		- TODO: export the analysisData$loadedData from the filterServer
	o When too many samples are removed in the subsetting, the app doesn't run: when more than 
	1 sample is removed the console shows a warning from PREP_DATA:
	"The data do not contain values corresponding to the chosen BMR, lowering the specified value of q may be necessary"
		- QUESTION: add a warning? 
	o Only show 'Distribution' checkboxes when continuous data is used
	o Fixed bug Bridge sampling (takes a while to run)
	o Changed warnings on Model Parameters to feedbackDanger()
		Implemented extra warning for mode (most likely value) (should be between min and max)
	o Disabled 'Fit model(s)' action button when feedback = danger
	o Put output of Model-averaged BMD in data table
	o Added download buttons for tables and plots
	(o In BMABMDR package: mistake in documentation of plot_prior function)
	o Added Advanced plots for both continuous and quantal data
		QUESTIONS:
		- Only prior_plot(Q) function?	
		- Different tabs for normal and lognormal models?
		- Download buttons?
		- Add subtitles according to template: 
		full model name (abbreviation) 
		abbreviation
		- Add list element models_included in output of full.laplaceQ_MA (as for continuous data)
	o QUESTION: when the model parameters are specified (background, priorBMD and maxy) in 
	the PREP_DATA functions, the function that returns the models takes a long time/doesn't run (full.laplace_MA)
		- Is this normal or is there a bug? Also doesn't work in the BMABMDR test file
	o TODO: there is a bug when an analysis on data is run, the same analysis runs again when the 
	data is changed. 	

0.0.3 
	o Changed default Prior for BMD credible from 0.95 to 0.9
	o Added warnings when lowering default values Advanced settings
	o Moved Model Weights table to Advanced Settings (with action link)
	o Added extra field 'Distribution' with checkboxes 'Normal' and 'Log-normal':
		- When either of them is unchecked, the corresponding model weights (under Advanced Settings)
		are set to 0. 
	o Changed label from Maximum response to Maximum/minimum response in Model Parameters
	o Warnings for model parameters:
		- When min and max values of model parameters are missing
		- When value is < 0
		- When max < min
	o Only allowed prior model parameters specifications when a single response is selected.
		- The same default values are used for both normal and lognormal models (PREP_DATA_N gives same values as
		PREP_DATA_LN)
	o Model parameter values are rounded to 2 decimals 
		- Added feedbackSuccess() to inform that the smallest number in R is used instead of 0
		- Added smallest number to 0's on back end 
	o Model parameter values for priorBMD are normalized by dose in PREP_DATA_N (using max of the dose column in the supplied data)
		- Normalized values converted by multiplication with max(dose)
	o Implemented quantal data:
		- Works for default input values 
		- QUESTIONS:
			- Model weights table under 'Advanced Settings': there are different models for the quantal data,
			how would you like to have the interaction with the checkboxes under 'Distribution'? 
			- Model Parameters: - do you want the same warnings for the quantal data as for the continuous data?
		
 	o TODO: 
		- Disable 'Fit model(s)'- action button if there are warnings
 		

0.0.2
    o single value in UI for credible interval BMD
    o prefill prior distribution fields and prior weights
    o include data.frame for model output    
	o Changed labels:
	  -"Probability vector to compute credible interval for the BMD" into "Probability for BMD credible interval"
	  -"Laplace Sampling" into "Laplace approximation"
	  -"Trend check" into "Check for dose-response effect"
	o Removed options to change in app:
		- all HMC values
		- all Pert prior values
	o Added warnings for CES and no. of draws for posterior distribution	
0.0.1
    o initial commit
