
Model-assisted robust optimization for continuous black-box problems

Sibghat Ullah, "Model-assisted robust optimization for continuous black-box problems", Leiden University, 2023.

Abstract

When solving real-world optimization problems, e.g., in automotive engineering, building construction, and steel production, uncertainty and noise are frequently encountered. Common sources include the search/decision variables that describe the system to be optimized, the environmental variables or operating conditions the system is subject to, the evaluation of the (physical) system or of a model of the system, and the preferences among objectives and the vagueness of constraints when modeling the (physical) system. Uncertainty and noise thus surround the system in most practical scenarios of continuous optimization, and can significantly compromise the applicability of optimization algorithms and of the (nominal) optimal solutions they produce. Within the ECOLE (Experience-based COmputation: Learning to optimisE) project, this thesis focuses on parametric uncertainties in the search/decision variables, which are assumed to be structurally symmetric, additive in nature, and modeled in either a deterministic or a probabilistic fashion. Accounting for these uncertainties and noise leads to robust optimization, which emphasizes solutions that remain optimal and useful in the face of such uncertainties and noise. Important performance indicators in product engineering include shortening the product-development cycle, reducing resource consumption throughout the process, and creating more balanced and innovative products; these practical aspects necessitate solving the robust optimization problem efficiently. Since assessing candidate solutions is very costly, we substitute the expensive function evaluations with a statistical model, referred to as the "surrogate model" or "meta-model". The model predicts the function response, so the optimization algorithm can query the model instead of actually running the real production process.

Chapter 3 answers some of the most important research questions on solving robust optimization problems with surrogate models. The chapter implements surrogate modeling within a "one-shot optimization" strategy, and discusses the practical applicability of surrogate modeling for finding robust solutions, as well as the difficulties involved. It is found that surrogate models can be constructed with Kriging, Polynomials, and Support Vector Machines using a reasonable sample size that grows linearly with the dimensionality. In most situations, the resulting surrogate model finds a robust solution that is very close to the true baseline robust solution. Since high dimensionality can degrade the performance of surrogate modeling in practical scenarios, the second half of Chapter 3 is devoted to dimensionality reduction techniques, including Principal Component Analysis, Kernel Principal Component Analysis, Autoencoders, and Variational Autoencoders. An empirical performance assessment indicates that Autoencoders and Principal Component Analysis are well suited to constructing a low-dimensional surrogate model.
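As a concrete illustration of the surrogate-assisted approach described above, the following minimal Python sketch draws a one-shot design, fits a Kriging (Gaussian process) surrogate, and then optimizes a worst-case robust counterpart on the surrogate alone. The test function, uncertainty radius, sample-size rule, and perturbation sampling are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
dim = 5                  # problem dimensionality (assumed for illustration)
delta = 0.1              # radius of the additive, symmetric uncertainty (assumed)
n_samples = 10 * dim     # sample size linear in the dimensionality

def expensive_f(X):
    # Stand-in for the costly simulation; a simple smooth test function.
    return np.sum((X - 0.3) ** 2, axis=-1) + 0.5 * np.sin(5.0 * X[..., 0])

# One-shot design of experiments: the expensive function is evaluated only once, up front.
X = 2.0 * qmc.LatinHypercube(d=dim, seed=0).random(n_samples) - 1.0
y = expensive_f(X)

# Kriging surrogate (Gaussian process regression with a Matern kernel).
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Fixed set of perturbations approximating the symmetric (box) uncertainty set.
dX = rng.uniform(-delta, delta, size=(128, dim))

def worst_case_surrogate(x):
    # Approximate the maximum over the uncertainty set using surrogate predictions only.
    return np.max(gp.predict(np.clip(x + dX, -1.0, 1.0)))

# Robust optimization on the cheap surrogate: no further expensive evaluations needed.
result = minimize(worst_case_surrogate, x0=np.zeros(dim), method="Nelder-Mead")
print("approximate robust optimum:", result.x)
```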
Chapter 4 discusses a major manifestation of surrogate modeling, the "Bayesian optimization" algorithm. The major research results in this chapter include adapting the Bayesian optimization algorithm to find robust solutions efficiently, as well as benchmarking its performance. To this end, it is found that the "Expected Improvement" criterion and the "Moment-Generating Function of the Improvement" are good choices of sampling infill criteria for the Bayesian optimization algorithm. Furthermore, the performance of the Bayesian optimization algorithm is deemed satisfactory in light of a fixed-budget and a fixed-target analysis.

A major point of concern in robust optimization is the choice of the robustness criterion, which has significant implications for designers in product engineering, since it largely dictates the computational budget and the quality of the optimal solution. Chapter 5 focuses on this computational aspect of the choice of the robustness criterion. Based on a broad spectrum of test cases, we assess and rank commonly employed robustness criteria with respect to a fixed-budget and a fixed-target analysis, in addition to an analysis of the average running time per iteration. The major findings provide a novel perspective on the choice of robustness criteria. For instance, the robustness criterion based on the "worst-case scenario" turns out to also be the most suitable criterion in terms of computational cost. Furthermore, the probabilistic robustness criteria exhibit a higher variance in solution quality, but a lower variance in the utilization of computational resources. Lastly, the probabilistic robustness criteria scale well with the dimensionality, whereas the deterministic criteria become inapplicable as the dimensionality increases.

Some of the findings reported above are validated by benchmarking our approaches on a real-world design optimization scenario based on car hood frames. These include the promising nature of Kriging as the modeling technique, as well as the heuristics commonly employed for determining the initial sample size. Furthermore, the "Moment-Generating Function of the Improvement" is corroborated as an effective sampling infill criterion for the Bayesian optimization algorithm.
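To illustrate the sampling infill criterion discussed in Chapter 4, here is a minimal sketch of a Bayesian optimization loop that selects each new evaluation point by maximizing the Expected Improvement of a Kriging model. The objective, bounds, budget, and candidate-set maximization are simplifying assumptions rather than the thesis setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
dim, budget = 3, 20

def robust_objective(x):
    # Stand-in for an expensive (effective/robust) objective evaluation.
    return np.sum(x ** 2) + 0.3 * np.abs(np.sin(4.0 * x[0]))

# Initial design.
X = rng.uniform(-1.0, 1.0, size=(5 * dim, dim))
y = np.array([robust_objective(x) for x in X])

def expected_improvement(cand, gp, y_best):
    # EI(x) = E[max(y_best - Y(x), 0)] for minimization, under the GP posterior.
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

for _ in range(budget):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(-1.0, 1.0, size=(2048, dim))           # candidate set
    x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, robust_objective(x_next))

print("best value found:", y.min())
```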

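To make the distinction between robustness criteria in Chapter 5 concrete, the short sketch below evaluates a deterministic worst-case criterion and a probabilistic expectation-based criterion for the same design point under additive, symmetric perturbations. The cheap model, noise radius, and Monte Carlo sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, delta = 4, 0.1   # dimensionality and perturbation radius (assumed)

def f_hat(X):
    # Cheap stand-in for a surrogate prediction.
    return np.sum((X - 0.2) ** 2, axis=-1)

def worst_case(x, n=256):
    # Deterministic (min-max) criterion: max over the box uncertainty set around x.
    dX = rng.uniform(-delta, delta, size=(n, dim))
    return np.max(f_hat(x + dX))

def expected_value(x, n=256):
    # Probabilistic criterion: expectation under symmetric additive perturbations.
    dX = rng.uniform(-delta, delta, size=(n, dim))
    return np.mean(f_hat(x + dX))

x = rng.uniform(-1.0, 1.0, size=dim)
print("worst-case value:", worst_case(x))
print("expected value  :", expected_value(x))
```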

