
Bayesian Evolutionary Optimization for Heterogeneously Expensive Multi-objective Optimization

Xilu Wang, "Bayesian Evolutionary Optimization for Heterogeneously Expensive Multi-objective Optimization", University of Surrey, 2022.

Abstract

Various multi-objective optimization algorithms have been proposed under the common assumption that evaluating each objective function takes the same period of time. Little attention has been paid to more general and realistic optimization scenarios in which different objectives are evaluated by different computer simulations or physical experiments with different time complexities (latencies), and only a very limited number of function evaluations is allowed for the slow objective. Such problems are known as heterogeneously expensive multi-objective optimization problems (HE-MOPs). Accordingly, this thesis presents several pieces of work for handling HE-MOPs. To begin with, we provide a clear description of HE-MOPs. Then, some necessary preliminaries are introduced, including Bayesian optimization, Gaussian processes, acquisition functions and a representative multi-objective evolutionary algorithm. In particular, we focus on multi-objective Bayesian optimization and provide a brief review of the existing work. As we turn to transfer learning techniques to solve HE-MOPs, an overview of Bayesian optimization with transfer learning is also given.

After that, we focus on bi-objective problems with different evaluation times and present a surrogate-assisted evolutionary algorithm with a parameter-based transfer learning scheme (T-SAEA). While the surrogate for the cheap objective can be updated on sufficient training data, the surrogate for the expensive one is updated either from its own training data or by a transfer learning approach. To identify transferable knowledge, a filter-based feature selection algorithm is used to capture the pivotal features of each objective, and the common important features are then used as a carrier for knowledge transfer between the cheap and expensive objectives. The corresponding parameters of the surrogate models are adaptively shared to enhance the quality of the surrogate models. The experimental results demonstrate that the proposed algorithm outperforms the compared algorithms on bi-objective optimization problems whose objectives differ considerably in computational complexity.

Moreover, we investigate instance-based transfer learning schemes for surrogate-assisted evolutionary algorithms (SAEAs) to solve bi-objective optimization problems whose objectives have non-uniform evaluation times. In the first scheme, a co-surrogate is adopted to model the functional relationship between the fast and slow objective functions, and a transferable instance selection method is introduced to acquire useful knowledge from the search process of the fast objective. In the second scheme, the training data for the surrogate of the slow objective function is augmented by transferring knowledge from the fast one. Specifically, a hybrid domain adaptation method that aligns the second-order statistics and marginal distributions across domains is introduced to generate promising samples in the decision space according to the search experience gained on the fast objective. A Gaussian process model based co-training method is adopted to predict the value of the slow objective, and predictions with a high confidence level are selected as augmented synthetic training data, thereby enhancing the approximation quality of the surrogate of the slow objective.

Finally, as only a few studies have addressed HE-MOPs, most of which focus on bi-objective problems, we aim to deal with HE-MOPs having more than two black-box and heterogeneous objectives.
To this end, we develop a multi-objective Bayesian evolutionary optimization approach to HE-MOPs that exploits the different data sets available for the cheap and expensive objectives to alleviate the search bias caused by their heterogeneous evaluation costs. To make the best use of the two training data sets, one containing solutions evaluated on all objectives and the other containing solutions evaluated only on the fast objectives, two separate Gaussian process models are constructed. In addition, a new acquisition function that mitigates the search bias towards the fast objectives is suggested, thereby achieving a balance between convergence and diversity. We demonstrate the effectiveness of the proposed algorithm on widely used multi-/many-objective benchmark problems whose objectives are assumed to be heterogeneously expensive.
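The following is a minimal illustrative sketch of the heterogeneous surrogate setup described above, not the algorithm proposed in the thesis: two independent Gaussian process models are fitted on training sets of different sizes (many evaluations of the fast objective, few of the slow one), and a simple lower-confidence-bound criterion stands in for the proposed acquisition function. The toy objectives, sample sizes and kernel choice are assumptions made purely for illustration.

```python
# Minimal sketch (not the thesis's algorithm): two independent GP surrogates
# fitted on differently sized data sets, reflecting a fast objective with many
# evaluations and a slow objective with few, plus a lower-confidence-bound
# criterion as a stand-in for the acquisition function proposed in the thesis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical bi-objective toy problem in a 2-D decision space.
def f_fast(x):   # cheap objective: many evaluations available
    return np.sum(x ** 2, axis=1)

def f_slow(x):   # expensive objective: only a few evaluations allowed
    return np.sum((x - 1.0) ** 2, axis=1)

# Heterogeneous training data: 40 samples for the fast objective,
# but the slow objective is evaluated on only 8 of them.
X_fast = rng.uniform(-2, 2, size=(40, 2))
X_slow = X_fast[:8]
y_fast, y_slow = f_fast(X_fast), f_slow(X_slow)

# One GP surrogate per objective, each trained on its own data set.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp_fast = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_fast, y_fast)
gp_slow = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_slow, y_slow)

def lcb(gp, X, beta=2.0):
    """Lower confidence bound: smaller values are more promising (minimization)."""
    mu, sigma = gp.predict(X, return_std=True)
    return mu - beta * sigma

# Score candidate solutions with both surrogates; the slow objective's larger
# predictive uncertainty (fewer training points) naturally pulls the search
# towards regions that are informative for it.
X_cand = rng.uniform(-2, 2, size=(200, 2))
scores = np.column_stack([lcb(gp_fast, X_cand), lcb(gp_slow, X_cand)])
print(scores[:5])
```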


