Abstract: Knowledge graphs organize real-world entities, concepts, and their relations in a structured graph form. Traditional static knowledge graphs face challenges in data quality, accuracy, complexity, and dynamic updating; because real-world information evolves constantly, maintenance grows increasingly difficult. Recently, large language models have achieved remarkable progress in semantic understanding and text generation, and their strong generalization ability across domains, modalities, and tasks brings new opportunities for knowledge graph construction. This paper surveys recent advances in using large language models to build knowledge graphs. It first introduces the basic concepts of knowledge graphs and large language models and outlines a general framework for their integration. It then analyzes the progress and challenges of large language models in three key phases: knowledge extraction, knowledge fusion, and knowledge reasoning. Next, it discusses practical applications in knowledge-based question answering and retrieval-augmented generation systems. Finally, it summarizes development trends and open problems, offering insights for future research.