When multiple agents cooperate or compete, the joint information space grows and the efficiency of information extraction between agents declines. This paper adopts a multi-agent reinforcement learning strategy with a filtering mechanism (FMAC) to enhance information communication between agents. The method identifies the agents relevant to a given agent and computes their information contribution according to this correlation, filtering out irrelevant agent information so as to realize effective communication in cooperative, competitive, or mixed environments. At the same time, centralized training with decentralized execution is adopted to address the non-stationarity of the environment. Comparative experiments verify that the improved algorithm raises the efficiency of strategy iteration, improves generalization, and remains stable as the number of agents increases, which is conducive to applying multi-agent reinforcement learning to a wider range of fields.
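The correlation-based filtering described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function name `filtered_communication`, the scaled dot-product correlation score, the softmax contribution weights, and the `threshold` cutoff are all assumptions chosen to make the idea concrete.

```python
import numpy as np

def filtered_communication(own_state, messages, threshold=0.2):
    """Hypothetical sketch of correlation-based message filtering.

    own_state: (d,) feature vector of the receiving agent.
    messages:  (n, d) matrix of messages from the other agents.
    Returns a message aggregated only from relevant agents.
    """
    # Correlation score between the agent and each message,
    # here a scaled dot product (one possible choice).
    scores = messages @ own_state / np.sqrt(own_state.shape[0])
    # Softmax turns scores into information-contribution weights.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Filter out agents whose contribution falls below the threshold.
    mask = weights >= threshold
    if not mask.any():
        return np.zeros_like(own_state)
    kept = weights[mask] / weights[mask].sum()  # renormalize kept weights
    return kept @ messages[mask]
```

With this sketch, an agent aggregates only the messages whose contribution weight passes the threshold, so communication cost does not scale with every agent in the system.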