Abstract: Knowledge representation learning (KRL)-based entity alignment represents the entities of multiple knowledge graphs (KGs) as low-dimensional embeddings and aligns entities by measuring the similarities between their vectors. Existing approaches, however, typically focus on textual information while ignoring image information, leaving the visual features of entities underutilized. To address this problem, we propose an approach for image-text multi-modal entity alignment (ITMEA) via joint knowledge representation learning. The approach jointly models multi-modal (image and text) data and embeds them into a uniform low-dimensional semantic space using a knowledge representation learning model that combines the translating embedding model (TransE) with the dynamic mapping matrix embedding model (TransD). In this low-dimensional semantic space, the link-mapping relations among aligned multi-modal entities are learned iteratively from a seed set, and the learned relations are then applied to unaligned entities, thereby achieving multi-modal entity alignment. Experimental results on the WN18-IMG dataset show that ITMEA achieves multi-modal entity alignment and is useful for the construction and completion of open knowledge graphs.
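For context, the two translation-based scoring functions that the abstract refers to can be sketched as follows; these are the standard formulations from the TransE and TransD literature, and the symbols $\mathbf{h}, \mathbf{r}, \mathbf{t}, \mathbf{h}_p, \mathbf{r}_p, \mathbf{t}_p$ follow those original papers rather than this work, whose exact combination of the two models is described in the paper body. TransE treats a relation as a translation in the embedding space and scores a triple $(h, r, t)$ by the distance
$$ f_r(h, t) = \lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert_{L_1/L_2}, $$
while TransD first projects the head and tail entities with relation- and entity-specific dynamic mapping matrices before applying the same translation:
$$ \mathbf{M}_{rh} = \mathbf{r}_p \mathbf{h}_p^{\top} + \mathbf{I}, \qquad \mathbf{M}_{rt} = \mathbf{r}_p \mathbf{t}_p^{\top} + \mathbf{I}, \qquad f_r(h, t) = \lVert \mathbf{M}_{rh}\mathbf{h} + \mathbf{r} - \mathbf{M}_{rt}\mathbf{t} \rVert_2^2. $$
A lower score indicates a more plausible triple in both models.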