Abstract: In real-world applications, the scarcity of labeled data and the distributional differences between source and target domains limit the generalization ability of models. Unsupervised domain adaptation (UDA) addresses this issue by reducing the distributional gap between domains, ensuring stable model performance in new environments. Over the past two decades, domain adaptation has been extensively studied from perspectives such as distribution alignment and feature transformation. However, existing surveys mostly focus on domain-invariant feature learning, and few systematically summarize the literature from the perspective of inter-domain class differences. In response, this paper presents a comprehensive review of domain-invariant feature learning and cross-domain class matching, centered on the core issue of category-space inconsistency. First, we introduce the basic concepts and mathematical definitions of distribution shift in domain adaptation, and categorize domain adaptation by label-set differences into closed-set, partial-set, open-set, and universal domain adaptation. Next, we comprehensively review existing methods from the perspectives of domain-invariant feature learning and cross-domain class matching. We then discuss variants of domain adaptation, including unsupervised, multi-source, and domain generalization settings, and, for the first time in a survey, introduce the problem of temporal domain adaptation/generalization. Finally, we summarize the applications of domain adaptation in fields such as natural language processing, computer vision, industrial time series, and recommendation systems, and outline future directions and challenges.