Model poisoning is a type of cyberattack in which malicious actors deliberately manipulate the training data or training process of an artificial intelligence (AI) model. By injecting carefully crafted, deceptive data, attackers can cause the model to make incorrect predictions or behave in unintended ways. This compromises the reliability, security, and fairness of AI systems, with potentially serious consequences in areas such as finance, healthcare, and security. Model poisoning is a significant concern in machine learning, especially in settings where multiple parties contribute training data, such as federated learning.
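To make the idea concrete, here is a minimal, hypothetical sketch of one common poisoning technique, label flipping: an attacker adds mislabeled points to the training set, which shifts the model's decision boundary so that previously correct inputs are misclassified. The dataset, the simple nearest-centroid "model", and all numeric values below are illustrative assumptions, not a real attack on a production system.

```python
import random

random.seed(0)

# Synthetic 1-D training set (hypothetical): class 0 clusters near 0.0,
# class 1 clusters near 5.0.
clean = [(random.gauss(0.0, 0.5), 0) for _ in range(50)] + \
        [(random.gauss(5.0, 0.5), 1) for _ in range(50)]

def train_centroids(data):
    """'Training' for a nearest-centroid classifier: mean feature per class."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in sums}

def predict(model, x):
    # Assign x to the class whose centroid is nearest.
    return min(model, key=lambda label: abs(x - model[label]))

# The attacker contributes poisoned points: features drawn from class 1's
# region but labeled as class 0 (a label-flipping attack).
poison = [(random.gauss(5.0, 0.5), 0) for _ in range(60)]

clean_model = train_centroids(clean)
poisoned_model = train_centroids(clean + poison)

# The poison drags class 0's centroid toward class 1's region, moving the
# decision boundary. A point at 3.5 was correctly classified before, but
# is misclassified after poisoning.
print(predict(clean_model, 3.5))     # class 1 (correct)
print(predict(poisoned_model, 3.5))  # class 0 (flipped by the attack)
```

The toy model is deliberately simple, but the mechanism is the same one that threatens real systems: because training averages over all contributed data, a minority of adversarial points can shift learned parameters enough to change predictions on clean inputs.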