Computational Study on Effectiveness of Knowledge Transfer in Dynamic Multi-objective Optimization
This file contains the output data obtained by running the experiments from the paper below:
Ruan, G., Minku, L., Menzel, S., Sendhoff, B., and Yao, X., “Computational Study on Effectiveness of Knowledge Transfer in Dynamic Multi-objective Optimization,” 2020 IEEE Congress on Evolutionary Computation (CEC)
Transfer learning has been used for solving various optimization problems, including dynamic multi-objective optimization problems, since it is believed to be able to transfer useful information from one problem instance to help solve another related instance. This paper aims to study how effective transfer learning is in dynamic multi-objective optimization (DMO). Through a computation time analysis of transfer learning, we show that the ‘inner’ optimization problem it introduces is very time-consuming. To enhance efficiency, two alternatives are computationally investigated on a number of dynamic bi- and tri-objective test problems. Experimental results show that the greatly enhanced efficiency does not cause much degradation in the performance of transfer learning. Considering the high computational cost of transfer learning, its original purpose in DMO may be negated: the computation time saved in optimization is eaten up by the computationally expensive transfer learning itself, so the gain in overall computational efficiency is smaller than expected. To verify this, experiments have been conducted in which the computational cost of transfer learning is instead spent on optimizing randomly generated solutions. The results demonstrate that, under the same total computational budget, the convergence and diversity of the final solutions generated from random solutions are significantly better than those generated from transferred solutions.
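The equal-budget comparison described above can be sketched as follows. This is a minimal illustration, not the paper's actual experimental code: the objective, the local search, and the budget figures are all placeholders, and the “transferred” population here is just another random sample standing in for the output of a transfer learning procedure. The point is only the budget accounting: the transfer-based variant pays a fixed cost for building its initial population and optimizes with what remains, while the random variant spends the full budget on optimization.

```python
import random

def sphere(x):
    """Toy single-objective placeholder for the paper's bi-/tri-objective benchmarks."""
    return sum(v * v for v in x)

def local_search(pop, budget, rng):
    """Spend exactly `budget` function evaluations improving `pop` by Gaussian perturbation."""
    used = 0
    while used < budget:
        i = used % len(pop)
        cand = [v + rng.gauss(0.0, 0.1) for v in pop[i]]
        used += 1
        if sphere(cand) < sphere(pop[i]):
            pop[i] = cand
    return pop

def run_comparison(total_budget=2000, transfer_cost=800, n=10, dim=5, seed=1):
    """Compare a 'transferred' start (paying transfer_cost) against a random start
    under the same total evaluation budget. All parameter values are illustrative."""
    rng = random.Random(seed)
    # Variant A: the transferred seeds are assumed to have cost `transfer_cost`
    # evaluations to produce, leaving less budget for the optimization itself.
    transferred = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    pop_a = local_search(transferred, total_budget - transfer_cost, rng)
    # Variant B: random seeds cost nothing, so the full budget goes to optimization.
    random_pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    pop_b = local_search(random_pop, total_budget, rng)
    return min(sphere(x) for x in pop_a), min(sphere(x) for x in pop_b)
```

In the paper this accounting is done for multi-objective solution sets and measured with convergence and diversity indicators rather than a single best objective value, but the budget split is the same idea.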