ABSTRACT

Objective: Healthcare AI models have substantial potential to improve patient outcomes by supporting medical diagnosis and treatment selection. However, biases embedded in these models produce unequal outcomes across patient populations, disproportionately harming groups that already face discrimination in care. This research aims to develop adaptable methods and uniform evaluation criteria for detecting and mitigating bias in healthcare AI.

Materials and Methods: We conducted a comprehensive review of current AI bias detection methods and examined prominent cases of biased healthcare algorithms that led to unequal patient outcomes. We propose a multi-level bias detection framework combining continuous monitoring, explainable AI (XAI), and regulatory compliance. The framework introduces a new metric that quantifies both the magnitude of bias and its impact on patients.