To address the insufficient reasoning capabilities of large language models (LLMs) and the "hallucination" phenomenon they exhibit in the domain of command and control (C2), this paper systematically reviews the technical paradigms of chain of thought (CoT) in LLMs and chain of command and decision (CoCD) in AI agents. A method for integrating CoT and CoCD from architectural and reasoning perspectives is proposed and validated through engineering practice. The study demonstrates that integrating CoT with CoCD can enhance the cognitive reasoning capabilities of LLMs in command and decision making, promoting a shift from experience-dependent to cognitively driven approaches to such tasks. Finally, the paper discusses future trends for LLMs and AI agents in the C2 domain and suggests potential solutions and research directions.