2024-03-29T15:43:15Z
https://zenodo.org/oai2d
oai:zenodo.org:3855193
2020-05-27T20:20:34Z
user-ecole_itn
software
user-eu
Jiawen Kong
Wojtek Kowalczyk
Duc Anh Nguyen
Stefan Menzel
Thomas Bäck
2020-02-20
<p>This is the source code used in the paper below:</p>
<p>Jiawen Kong, Wojtek Kowalczyk, Duc Anh Nguyen, Stefan Menzel and Thomas Bäck, “Hyperparameter Optimisation for Improving Classification under Class Imbalance”, in 2019 IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 6-9 December 2019, doi: 10.1109/SSCI44817.2019.9002679</p>
<p>Although the class-imbalance classification problem has caught a huge amount of attention, hyperparameter optimisation has not been studied in detail in this field. Both classification algorithms and resampling techniques involve some hyperparameters that can be tuned. This paper sets up several experiments and draws the conclusion that, compared to using default hyperparameters, applying hyperparameter optimisation for both classification algorithms and resampling approaches can produce the best results for classifying the imbalanced datasets. Moreover, this paper shows that data complexity, especially the overlap between classes, has a big impact on the potential improvement that can be achieved through hyperparameter optimisation. Results of our experiments also indicate that using resampling techniques cannot improve the performance for some complex datasets, which further emphasizes the importance of analyzing data complexity before dealing with imbalanced datasets.</p>
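<p>As a minimal illustration of the idea the abstract describes (not the paper's actual pipeline), the sketch below compares a classifier with default hyperparameters against a tuned one on a synthetic imbalanced dataset; the dataset sizes, the parameter grid, and the choice of a decision tree are all invented for this example.</p>

```python
# Hedged sketch: tuned vs. default hyperparameters on an imbalanced dataset.
# All settings (dataset, grid, classifier) are illustrative, not the paper's.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Synthetic roughly 9:1 imbalanced dataset (sizes chosen arbitrarily).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Baseline: default hyperparameters.
default_auc = roc_auc_score(
    y_te,
    DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

# Tuned: small grid over depth and class weighting, selected by cross-validated AUC.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    {"max_depth": [3, 5, 10, None], "class_weight": [None, "balanced"]},
    scoring="roc_auc", cv=5).fit(X_tr, y_tr)
tuned_auc = roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1])
print(round(default_auc, 3), round(tuned_auc, 3))
```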
https://doi.org/10.5281/zenodo.3855193
oai:zenodo.org:3855193
eng
Zenodo
https://zenodo.org/communities/ecole_itn
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.3855192
info:eu-repo/semantics/openAccess
SSCI, The 2019 IEEE Symposium Series on Computational Intelligence, Xiamen, China, 6-9 December 2019
Class Imbalance
Hyperparameter Optimisation
Overlapping Classes
Hyperparameter Optimisation for Improving Classification under Class Imbalance
info:eu-repo/semantics/other
oai:zenodo.org:3859701
2020-05-27T20:20:32Z
user-ecole_itn
openaire_data
user-eu
Gan Ruan
Leandro L. Minku
Stefan Menzel
Bernhard Sendhoff
Xin Yao
2020-05-27
<p>This file is the output data obtained when running the experiments from the paper below:</p>
<p>Ruan, G., Minku, L.L., Menzel, S., Sendhoff, B., Yao, X., “Computational Study on Effectiveness of Knowledge Transfer in Dynamic Multi-objective Optimization”, in <em>2020 IEEE Congress on Evolutionary Computation</em></p>
<p>Transfer learning has been used for solving multiple optimization and dynamic multi-objective optimization problems, since transfer learning is believed to be able to transfer useful information from one problem instance to help solve another related problem instance. This paper aims to study how effective transfer learning is in dynamic multi-objective optimization (DMO). Through computation time analysis of transfer learning, we show that the ‘inner’ optimization problem introduced by transfer learning is very time-consuming. In order to enhance the efficiency, two alternatives are computationally investigated on a number of dynamic bi- and tri-objective test problems. Experimental results show that the greatly enhanced efficiency does not cause much degradation in the performance of transfer learning. Considering the high computational cost of transfer learning, it is likely that the original purpose of using transfer learning in DMO is negated: the computation time saved in optimization is consumed by the computationally expensive transfer learning itself, so there is less gain than expected in the overall computational efficiency. To verify this, experiments have been conducted in which the computational budget otherwise spent on transfer learning is used to optimize randomly generated solutions instead. The results demonstrate that the convergence and diversity of final solutions generated from the random solutions are significantly better than those generated from transferred solutions under the same total computational budget.</p>
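<p>The budget argument in the abstract can be made concrete with a toy accounting model (every figure below is invented for illustration): if the wall-clock cost of the ‘inner’ transfer-learning problem rivals the cost of the objective evaluations it saves, the net gain vanishes.</p>

```python
# Toy budget model (all figures hypothetical): weigh the optimization time a
# transferred start saves against the overhead of the transfer-learning step.
eval_cost = 0.5          # seconds per objective evaluation (assumed)
transfer_overhead = 200  # seconds for the 'inner' transfer-learning problem (assumed)
evals_saved = 300        # evaluations the transferred start is assumed to save

time_saved = evals_saved * eval_cost                      # 150 s saved in optimization
net_gain = time_saved - transfer_overhead                 # negative: overhead dominates
extra_random_evals = int(transfer_overhead / eval_cost)   # 400 plain evaluations instead
print(net_gain, extra_random_evals)                       # -50.0 400
```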
https://doi.org/10.5281/zenodo.3859701
oai:zenodo.org:3859701
eng
Zenodo
https://zenodo.org/communities/ecole_itn
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.3859700
info:eu-repo/semantics/openAccess
Creative Commons Attribution Share Alike 4.0 International
https://creativecommons.org/licenses/by-sa/4.0/legalcode
IEEE CEC, 2020 IEEE Congress on Evolutionary Computation, Glasgow, UK, 19-24 July 2020
Evolutionary algorithms
transfer learning
dynamic multi-objective optimization
prediction-based method
Computational Study on Effectiveness of Knowledge Transfer in Dynamic Multi-objective Optimization
info:eu-repo/semantics/other
oai:zenodo.org:3854910
2020-05-27T20:20:32Z
user-ecole_itn
openaire_data
Sibghat Ullah
Hao Wang
Stefan Menzel
Thomas Bäck
Bernhard Sendhoff
2020-02-20
<p>This is the data and source code used in the paper below:</p>
<p>Sibghat Ullah, Hao Wang, Stefan Menzel, Bernhard Sendhoff and Thomas Bäck, “An Empirical Comparison of Meta-Modeling Techniques for Robust Design Optimization”, in 2019 IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 6-9 December 2019, doi: 10.1109/SSCI44817.2019.9002805</p>
<p>This research investigates the potential of using meta-modeling techniques in the context of robust optimization, namely optimization under uncertainty/noise. A systematic empirical comparison is performed to evaluate and compare different meta-modeling techniques for robust optimization. The experimental setup includes three noise levels, six meta-modeling algorithms, and six benchmark problems from the continuous optimization domain, each for three different dimensionalities. Two robustness definitions, robust regularization and robust composition, are used in the experiments. The meta-modeling techniques are evaluated and compared with respect to modeling accuracy and the optimal function values. The results clearly show that Kriging, Support Vector Machine and Polynomial regression perform excellently, as they achieve high accuracy and, in most cases, the optimal point on the model landscape is close to the true optimum of the test functions.</p>
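<p>A minimal sketch of the kind of comparison the abstract describes (nothing here reproduces the paper's actual setup): three meta-models are fitted to noisy samples of a simple benchmark function and scored for modeling accuracy. The function, noise level, sample sizes, and model settings are all assumptions for the demo.</p>

```python
# Hedged sketch: fit three meta-models to noisy samples of a toy benchmark
# function and compare modeling accuracy (R^2). Setup is illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2, axis=1)            # sphere function as a stand-in benchmark
X = rng.uniform(-2, 2, size=(200, 2))
y = f(X) + rng.normal(0, 0.1, size=200)       # additive noise (level assumed)
X_test = rng.uniform(-2, 2, size=(100, 2))
y_test = f(X_test)

models = {
    "Kriging": GaussianProcessRegressor(),
    "SVM": SVR(C=10.0),
    "Polynomial": make_pipeline(PolynomialFeatures(2), LinearRegression()),
}
scores = {name: r2_score(y_test, m.fit(X, y).predict(X_test))
          for name, m in models.items()}
print(scores)
```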
https://doi.org/10.5281/zenodo.3854910
oai:zenodo.org:3854910
eng
Zenodo
https://zenodo.org/communities/ecole_itn
https://doi.org/10.5281/zenodo.3854909
info:eu-repo/semantics/openAccess
Creative Commons Attribution Share Alike 4.0 International
https://creativecommons.org/licenses/by-sa/4.0/legalcode
SSCI, The 2019 IEEE Symposium Series on Computational Intelligence, Xiamen, China, 6-9 December 2019
meta-modeling
surrogate-assisted optimization
robust optimization
quality engineering
machine learning
An Empirical Comparison of Meta-Modeling Techniques for Robust Design Optimization
info:eu-repo/semantics/other
oai:zenodo.org:5503895
2022-02-23T09:25:36Z
user-ecole_itn
software
FayKong
2021-09-13
<p>Improving Imbalanced Classification by Anomaly Detection</p>
https://doi.org/10.5281/zenodo.5503895
oai:zenodo.org:5503895
Zenodo
https://github.com/ECOLE-ITN/KongPPSN2020/tree/KongPPSN2020-v1.0
https://zenodo.org/communities/ecole_itn
https://doi.org/10.5281/zenodo.5503894
info:eu-repo/semantics/openAccess
Other (Open)
ECOLE-ITN/KongPPSN2020:
info:eu-repo/semantics/other
oai:zenodo.org:3859741
2020-05-27T20:20:34Z
user-ecole_itn
software
Sibghat Ullah
Zhao Xu
Hao Wang
Stefan Menzel
Bernhard Sendhoff
2020-05-27
<p>This is the source code used in the following paper:</p>
<p>Ullah, S., Xu, Z., Wang, H., Menzel, S., Sendhoff, B., "Exploring Clinical Time Series Forecasting with Meta-Features in Variational Recurrent Models" <em>2020 IEEE World Congress on Computational Intelligence </em></p>
<p>This paper investigates the effectiveness of supplementary medical information for improving the predictions of variational recurrent models in clinical time series forecasting.</p>
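<p>One common way to feed static “meta” information into a recurrent model (offered here only as an illustrative numpy sketch, not the paper's variational architecture) is to concatenate the per-patient meta-feature vector onto every timestep's input:</p>

```python
# Hedged sketch: a plain tanh RNN cell whose per-step input is the time-series
# observation concatenated with a static meta-feature vector. All dimensions
# and weights are arbitrary; this is not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
T, x_dim, m_dim, h_dim = 24, 5, 3, 16   # horizon and sizes (assumed)

x_seq = rng.normal(size=(T, x_dim))     # one patient's time series
meta = rng.normal(size=m_dim)           # static meta-features (e.g. demographics)

W_in = rng.normal(scale=0.1, size=(h_dim, x_dim + m_dim))
W_h = rng.normal(scale=0.1, size=(h_dim, h_dim))
b = np.zeros(h_dim)

h = np.zeros(h_dim)
for t in range(T):
    step_input = np.concatenate([x_seq[t], meta])   # condition every step on meta
    h = np.tanh(W_in @ step_input + W_h @ h + b)

print(h.shape)   # final hidden state, usable by a forecasting head
```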
https://doi.org/10.5281/zenodo.3859741
oai:zenodo.org:3859741
eng
Zenodo
https://zenodo.org/communities/ecole_itn
https://doi.org/10.5281/zenodo.3859740
info:eu-repo/semantics/openAccess
GNU General Public License v3.0 or later
https://www.gnu.org/licenses/gpl-3.0-standalone.html
IJCNN 2020, 2020 IEEE World Congress on Computational Intelligence, Glasgow, UK, 19-24 July 2020
time series forecasting
recurrent neural networks
deep latent-variable models
MIMIC III
Clinical Applications
Exploring Clinical Time Series Forecasting with Meta-Features in Variational Recurrent Models
info:eu-repo/semantics/other
oai:zenodo.org:3859594
2020-05-27T20:20:32Z
user-ecole_itn
openaire_data
user-eu
Gan Ruan
Leandro L. Minku
Stefan Menzel
Bernhard Sendhoff
Xin Yao
2020-02-20
<p>This file is the output data obtained when running the experiments of the paper below:</p>
<p>Ruan, G., Minku, L.L., Menzel, S., Sendhoff, B., Yao, X., "When and How to Transfer Knowledge in Dynamic Multi-objective Optimization," 2019 IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 2019, pp. 2034-2041. </p>
<p>Transfer learning has been used for solving multiple optimization and dynamic multi-objective optimization problems, since transfer learning is able to transfer useful information from one problem to help solve another related problem. This paper aims to investigate when and how transfer learning works or fails in dynamic multi-objective optimization. Through computational analyses on a number of dynamic bi- and tri-objective benchmark problems, we show that transfer learning fails on problems with fixed Pareto optimal solution sets and under small environmental changes. We also show that the Gaussian kernel function used in the existing transfer learning-based method is not always adequate. Therefore, transfer learning should be avoided on problems for which it fails, and other kernel functions should be used when the Gaussian kernel is inadequate. This paper proposes novel strategies and kernel functions that can be used in such cases. Experimental studies demonstrate the superiority of our proposed techniques over state-of-the-art methods on a number of dynamic bi- and tri-objective test problems.</p>
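<p>The kernel point in the abstract can be illustrated with the standard textbook definitions (a generic sketch; the paper's actual kernel choices and parameters are not reproduced here): the Gaussian kernel used by the existing transfer method, next to a polynomial kernel as one possible alternative.</p>

```python
# Hedged sketch: the Gaussian (RBF) kernel and a polynomial kernel as one
# common alternative. Parameter values are illustrative, not the paper's.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2))"""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def polynomial_kernel(x, y, degree=2, c=1.0):
    """k(x, y) = (x . y + c)^degree"""
    return (np.dot(x, y) + c) ** degree

x = np.array([0.0, 1.0])
y = np.array([1.0, 1.0])
print(gaussian_kernel(x, y), polynomial_kernel(x, y))  # exp(-0.5) and 4.0
```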
https://doi.org/10.5281/zenodo.3859594
oai:zenodo.org:3859594
eng
Zenodo
https://zenodo.org/communities/ecole_itn
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.3859593
info:eu-repo/semantics/openAccess
Creative Commons Attribution Share Alike 4.0 International
https://creativecommons.org/licenses/by-sa/4.0/legalcode
SSCI, The 2019 IEEE Symposium Series on Computational Intelligence, Xiamen, China, 6-9 December 2019
Evolutionary algorithms
transfer learning
dynamic multi-objective optimization
prediction-based method
When and How to Transfer Knowledge in Dynamic Multi-objective Optimization
info:eu-repo/semantics/other
oai:zenodo.org:3855094
2020-05-27T20:20:34Z
user-ecole_itn
software
Jiawen Kong
Thiago Rios
Wojtek Kowalczyk
Stefan Menzel
Thomas Bäck
2020-05-06
<p>This file is the source code used in the paper below:</p>
<p>Jiawen Kong, Thiago Rios, Wojtek Kowalczyk, Stefan Menzel and Thomas Bäck, “On the Performance of Oversampling Techniques for Class Imbalance Problems” in the 24th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Singapore, 11-14 May 2020, doi: 10.1007/978-3-030-47436-2_7</p>
<p>Although over 90 oversampling approaches have been developed in the imbalance learning domain, most empirical studies and applications are still based on the “classical” resampling techniques. In this paper, several experiments on 19 benchmark datasets are set up to study the efficiency of six powerful oversampling approaches, including both “classical” and new ones. According to our experimental results, oversampling techniques that consider the minority class distribution (the new ones) perform better in most cases, and RACOG gives the best performance among the six reviewed approaches. We further validate our conclusion on our real-world-inspired vehicle datasets and also find that applying oversampling techniques can improve the performance by around 10%. In addition, seven data complexity measures are considered for the initial purpose of investigating the relationship between data complexity measures and the choice of resampling techniques. Although no obvious relationship can be abstracted from our experiments, we find that the F1v value, a measure of class overlap that most researchers ignore, has a strong negative correlation with the potential AUC value (after resampling).</p>
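<p>For context on what a “classical” oversampler does (a minimal numpy sketch of SMOTE-style interpolation, not any of the six approaches benchmarked in the paper): each synthetic minority point is placed on the line segment between a minority sample and one of its nearest minority-class neighbors.</p>

```python
# Hedged, didactic sketch of SMOTE-style oversampling: synthesize minority
# points by interpolating between a minority sample and one of its k nearest
# minority neighbors. The toy data and k are arbitrary.
import numpy as np

def smote_like(X_min, n_new, k=3, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to every minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]      # skip the sample itself
        j = rng.choice(neighbors)
        lam = rng.random()                      # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(0)
X_min = rng.normal(size=(20, 2))                # toy minority class
X_new = smote_like(X_min, n_new=30, rng=rng)
print(X_new.shape)                              # (30, 2)
```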
https://doi.org/10.5281/zenodo.3855094
oai:zenodo.org:3855094
eng
Zenodo
https://zenodo.org/communities/ecole_itn
https://doi.org/10.5281/zenodo.3855093
info:eu-repo/semantics/openAccess
GNU General Public License v3.0 or later
https://www.gnu.org/licenses/gpl-3.0-standalone.html
PAKDD, The 24th Pacific-Asia Conference on Knowledge Discovery and Data Mining, 11-14 May 2020
Class imbalance
Minority class distribution
Data complexity measures
On the Performance of Oversampling Techniques for Class Imbalance Problems
info:eu-repo/semantics/other