BERT MUTATION: DEEP TRANSFORMER MODEL FOR MASKED UNIFORM MUTATION IN GENETIC PROGRAMMING

We introduce BERT mutation, a novel, domain-independent mutation operator for Genetic Programming (GP) that leverages advanced Natural Language Processing (NLP) techniques, in particular the Masked Language Modeling approach, to improve convergence. By combining deep reinforcement learning with the BERT transformer architecture, BERT mutation intelligently suggests node replacements within GP trees to enhance their fitness. Unlike traditional stochastic mutation methods, BERT mutation adapts dynamically, using historical fitness data to optimize mutation decisions and thereby producing more effective evolutionary improvements.
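To make the masked-node idea concrete, here is a minimal Python sketch. It is illustrative only: the primitive vocabulary, the prefix linearization, and the `predict_masked` function (a uniform placeholder here) are our assumptions, standing in for the paper's trained transformer and its reinforcement-learning fitness feedback.

```python
# Hypothetical sketch of masked-node mutation in the spirit of BERT mutation.
# `predict_masked` below is a stand-in for a trained masked language model;
# the actual operator scores candidates with a BERT-style transformer and
# refines it with historical fitness data via deep reinforcement learning.
import random

# GP primitive vocabulary: functions with their arities, terminals with arity 0.
VOCAB = {"add": 2, "sub": 2, "mul": 2, "x": 0, "1": 0}

def linearize(tree):
    """Flatten a nested-tuple GP tree into a prefix token sequence."""
    if isinstance(tree, tuple):
        op, *args = tree
        return [op] + [tok for a in args for tok in linearize(a)]
    return [tree]

def predict_masked(tokens, mask_idx):
    """Placeholder for the trained model: return a probability distribution
    over arity-matched replacements for the masked token. Matching arity
    keeps the mutated tree syntactically valid."""
    arity = VOCAB[tokens[mask_idx]]
    candidates = [t for t, a in VOCAB.items() if a == arity]
    return {t: 1.0 / len(candidates) for t in candidates}  # uniform stand-in

def bert_mutation(tokens):
    """Mask one node and sample its replacement from the model's distribution."""
    idx = random.randrange(len(tokens))
    dist = predict_masked(tokens, idx)
    tokens = list(tokens)
    tokens[idx] = random.choices(list(dist), weights=dist.values())[0]
    return tokens

tokens = linearize(("add", ("mul", "x", "x"), "1"))  # represents x*x + 1
print(bert_mutation(tokens))
```

With the uniform placeholder this reduces to ordinary uniform mutation; the point of BERT mutation is that replacing `predict_masked` with a fitness-trained transformer makes the replacement distribution context-aware rather than random.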

Through comprehensive evaluations across three benchmark domains, we demonstrate that BERT mutation significantly outperforms conventional and state-of-the-art mutation operators in both convergence speed and solution quality. This work represents a pivotal step toward integrating state-of-the-art deep learning into evolutionary algorithms, pushing the boundaries of adaptive optimization in GP.
