
TRANSFER LEARNING FOR BAYESIAN OPTIMIZATION

Name

Course

Instructor's Name

Institution's Name

Date

Transfer Learning for Bayesian Optimization

Transfer learning for Bayesian optimization is the process of applying insights gained from one Bayesian optimization task to a related task. The goal is to reduce the number of lengthy and complicated evaluations required in the target task by taking advantage of the knowledge acquired during the source task. Existing literature has provided insightful information on the improvements that have been made in transfer learning for Bayesian optimization. This literature review evaluates that literature to identify areas that have not yet been adequately solved in achieving effective transfer learning for improved optimization performance.

A number of advancements have been made in Bayesian optimization (BO), with notable improvements in areas such as transfer learning. Transfer learning yields high-performance solutions by leveraging knowledge from previous tasks to speed up optimization on target tasks (Wang et al., 2023). Transfer learning in BO has evolved by integrating advanced probabilistic modeling techniques with domain adaptation strategies, enabling efficient knowledge transfer. This evolution has enhanced the adaptability and scalability of BO methods, making them more effective in scenarios with varying degrees of task similarity and data heterogeneity (Malte and Ratadiya, 2019). Despite these advancements, transfer learning remains an active field of research, especially with respect to heterogeneity of search spaces. The insights from these studies suggest that incorporating transfer learning has enhanced the optimization process in recent years.
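To make the underlying idea concrete, the following is a minimal warm-start sketch, not taken from any of the cited papers, assuming the source and target tasks share the same search space. The array names and the expected-improvement acquisition are illustrative choices.

```python
# Minimal warm-start sketch (illustrative, not from any cited paper): pool
# observations from a finished source task with the few target observations
# so the surrogate is informed before the target task has been explored.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def suggest_next(source_X, source_y, target_X, target_y, candidates):
    """Fit one GP on pooled source and target data and return the candidate
    point with the highest expected improvement over the best target value."""
    X = np.vstack([source_X, target_X])
    y = np.concatenate([source_y, target_y])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    mu, sigma = gp.predict(candidates, return_std=True)
    best = target_y.min()                       # assuming minimization
    improvement = best - mu
    z = improvement / np.maximum(sigma, 1e-9)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
    return candidates[np.argmax(ei)]
```

Because the surrogate already reflects the source observations, early target evaluations concentrate on regions that looked promising in the source task, which is where the saving in expensive evaluations comes from.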

Working with heterogeneous search spaces in transfer learning is complex. The choice of approach depends on the specific characteristics shared between the source and target tasks (Bai et al., 2023). The success of transfer learning in BO centers on the extent to which these tasks share relevant knowledge or exhibit similarities in their respective search spaces. Similar underlying structures, such as common data distributions, feature spaces, or optimization landscapes, enable an effective and efficient transfer learning approach by allowing direct application of insights from the source task to the target task (Kouw and Loog, 2018). A more deliberate strategy is needed for heterogeneous search spaces, where notable differences exist between the source and target tasks, such as variations in data domains, feature representations, and objective functions (Zhang et al., 2019). Feature adaptation, domain alignment, and selective transfer learning are among the proposals for bridging the gap between heterogeneous search spaces (Min et al., 2020). However, none of these techniques is universally applicable to domain adaptation in real-world scenarios (Shi et al., 2022). These studies affirm that the choice of approach in transfer learning for BO is context-dependent and must ensure relevance and adaptability in the target task.
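As a toy illustration of the feature-adaptation idea, the sketch below hand-specifies how shared hyperparameters line up between a source and a target search space; `col_map` and the midpoint defaults are hypothetical simplifications, whereas the approaches cited above learn such mappings or alignments from data.

```python
# Toy feature adaptation (hypothetical alignment, hand-specified rather than
# learned): map source observations into the target search space so they can
# be pooled with target data when fitting the surrogate.
import numpy as np

def adapt_source_points(source_X, col_map, target_bounds):
    """col_map maps a source column index to the matching target column index;
    target-only dimensions are filled with the midpoint of their range."""
    n, d_target = source_X.shape[0], target_bounds.shape[0]
    lo, hi = target_bounds[:, 0], target_bounds[:, 1]
    adapted = np.tile((lo + hi) / 2.0, (n, 1))   # neutral defaults
    for src_col, tgt_col in col_map.items():
        adapted[:, tgt_col] = np.clip(source_X[:, src_col], lo[tgt_col], hi[tgt_col])
    return adapted

# Example: source dims 0 and 2 correspond to target dims 0 and 1.
# adapted = adapt_source_points(source_X, {0: 0, 2: 1},
#                               np.array([[0.0, 1.0], [1e-4, 1e-1]]))
```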

Selecting data for transfer learning in BO is challenging. The approach used to select relevant and informative data from the source task for the target task is crucial (Ruder and Plank, 2017). BO itself can be applied as the optimization procedure for data selection, suggesting data subsets from the source task to be evaluated on the target task for improved performance (Perrone et al., 2019). In practice, however, this data selection process faces a number of challenges. One of the major challenges is data representation: even as transfer learning becomes more common, it has been difficult to design representations that capture the informative aspects of the source task for the target task. Since BO relies on evaluations of the objective function to build an effective probabilistic model, inaccurate representation of the data leads to poor optimization performance (Horváth et al., 2021). Other challenges include kernel choices, scalability, model transferability, and real-world applicability (Bai et al., 2023). An effective framework for addressing these challenges is yet to be formulated, which limits the effectiveness of data selection for transfer learning in modern science and technology.
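The sketch below illustrates one way such BO-driven data selection could look, loosely in the spirit of Ruder and Plank (2017): an off-the-shelf BO loop searches over weights that score source examples by hand-crafted features, and the objective trains on the top-scored subset and reports target-task validation error. `feature_matrix`, `train_and_eval`, and the subset size are hypothetical placeholders for a user's own pipeline, not an API from the cited work.

```python
# Sketch of BO-driven data selection: BO searches over weights that score
# source examples by hand-crafted features; the objective trains on the
# top-scored subset and returns the target-task validation error.
# `feature_matrix` (examples x features) and `train_and_eval` are
# hypothetical placeholders supplied by the user.
import numpy as np
from skopt import gp_minimize   # off-the-shelf Bayesian optimization loop

def make_objective(feature_matrix, train_and_eval, subset_size):
    def objective(weights):
        scores = feature_matrix @ np.asarray(weights)   # score every source example
        chosen = np.argsort(scores)[-subset_size:]      # keep the top-scored subset
        return train_and_eval(chosen)                   # error on target validation data
    return objective

# n_features = feature_matrix.shape[1]
# result = gp_minimize(make_objective(feature_matrix, train_and_eval, 500),
#                      dimensions=[(-1.0, 1.0)] * n_features, n_calls=30)
# best_weights, best_error = result.x, result.fun
```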

The existing literature provides valuable insights into the advancements in Bayesian optimization, especially in transfer learning. Leveraging knowledge from previous tasks for new target tasks has led to high-performance solutions in BO. Despite this progress, there is a notable research gap in selecting data for transfer learning and in addressing data transfer challenges in BO. Bridging this gap requires the development of robust techniques that can systematically quantify, model, and adapt to these disparities, leading to more efficient transfer learning in BO.

References

Bai, T., Li, Y., Shen, Y., Zhang, X., Zhang, W. and Cui, B., 2023. Transfer learning for Bayesian optimization: A survey. arXiv preprint arXiv:2302.05927, pp. 1-35.

Horváth, S., Klein, A., Richtárik, P. and Archambeau, C., 2021. Hyperparameter transfer learning with adaptive complexity. In: International Conference on Artificial Intelligence and Statistics, Valencia, Spain, April 25-27. Proceedings of Machine Learning Research, pp. 1-8.

Kouw, W.M. and Loog, M., 2018. An introduction to domain adaptation and transfer learning. arXiv preprint arXiv:1812.11806, pp. 1-11.

Malte, A. and Ratadiya, P., 2019. Evolution of transfer learning in natural language processing. arXiv preprint arXiv:1910.07370, pp. 1-12.

Min, A.T.W., Gupta, A. and Ong, Y., 2020. Generalizing transfer Bayesian optimization to source-target heterogeneity. IEEE Transactions on Automation Science and Engineering, 18(4), pp. 1-11.

Perrone, V., Shen, H., Seeger, M.W., Archambeau, C. and Jenatton, R., 2019. Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning. Advances in Neural Information Processing Systems, 32(1), pp. 1-17.

Ruder, S. and Plank, B., 2017. Learning to select data for transfer learning with Bayesian optimization. arXiv preprint arXiv:1707.05246, pp. 1-11.



Shi, W., Zhang, L., Chen, W. and Pu, S., 2022. Universal domain adaptive object detector. In: Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, October 10-14. Association for Computing Machinery, pp. 1-11.

Wang, X., Jin, Y., Schmitt, S. and Olhofer, M., 2023. Recent advances in Bayesian optimization. ACM Computing Surveys, 55(13s), pp. 1-36.

Zhang, J., Li, W., Ogunbona, P. and Xu, D., 2019. Recent advances in transfer learning for cross-dataset visual recognition: A problem-oriented perspective. ACM Computing Surveys (CSUR), 52(1), pp. 1-38.
