Abstract:
Chinese characters embody millennia of cultural heritage, and minority scripts such as Yi urgently need digitization; yet traditional font craft is slow and labor-intensive, and GAN-based style transfer suffers from stroke distortion, heavy data requirements, and poor generalization between Chinese and Yi. In this paper, we address few-shot font style transfer between Chinese and Yi characters with the goal of more competitive performance. First, we integrate a self-attention mechanism into the generator through a Layer Attention Network and a Context-aware Attention Network. Second, we construct a paired trilingual dataset of Chinese characters, English letters, and Yi script, enabling the model to learn style transfer from Chinese to both English and Yi content. Third, we adopt a many-to-many training scheme: instead of learning only from boldface (SimHei) as the source style, the model randomly selects style fonts from the training set in each iteration to improve generalization. Finally, we introduce a topology-aware loss that reduces the occurrence of distorted characters and improves the structural integrity of the generated glyphs. Experiments verify the model's performance in transferring Chinese and English font styles to Yi script, as well as its ability to handle highly artistic Chinese cursive scripts. Results show that the proposed model outperforms comparison models such as EMD, DFS, and the base FTransGAN in stroke clarity and structural stability, effectively addressing the shortcomings of existing models in cross-language font transfer and artistic font generation.
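The many-to-many training scheme mentioned above (random style-font selection per iteration instead of a fixed SimHei source) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name, the toy font pool, and the batch-loop structure are all assumptions made for clarity.

```python
import random

def sample_training_pair(fonts, rng):
    """Draw a (source_font, style_font) pair uniformly at random,
    with source != style, so every iteration sees a fresh pairing
    rather than a single fixed source style."""
    source, style = rng.sample(fonts, 2)
    return source, style

# Toy font pool standing in for the real training set of fonts.
fonts = ["SimHei", "KaiTi", "FangSong", "LiShu", "XingShu"]
rng = random.Random(0)

for step in range(3):
    source_font, style_font = sample_training_pair(fonts, rng)
    # In training, glyphs rendered in `source_font` would supply content,
    # and glyphs in `style_font` would supply the few-shot style references.
    print(step, source_font, "->", style_font)
```

The design intent, as described in the abstract, is that exposing the generator to many source/style pairings rather than one fixed source font improves its generalization to unseen styles.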