Explainable AI for Payment Fraud Detection
How we combined classical ML techniques with explainable AI to build transparent and effective fraud detection systems.
In the world of fraud detection, accuracy isn't everything. Understanding why a transaction is flagged as suspicious is equally important.
The Problem with Black Boxes
Traditional ML models often act as black boxes: they make predictions but don't explain their reasoning. This creates problems in fraud detection: analysts can't verify why a transaction was flagged, customers can't meaningfully contest false positives, and regulators increasingly require a justification for automated decisions.
Our Approach
We combined classical ML algorithms with SHAP (SHapley Additive exPlanations) to create transparent predictions.
The Tech Stack
Results
Our hybrid approach achieved 94% precision while providing human-readable explanations for every prediction.
```python
import shap
import xgboost as xgb

# Train the gradient-boosted model
model = xgb.XGBClassifier()
model.fit(X_train, y_train)

# Generate per-feature explanations
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Visualize global feature importance
shap.summary_plot(shap_values, X_test)
```
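To show how attribution scores become the human-readable explanations mentioned above, here is a minimal sketch: it takes one prediction's per-feature contributions (such as a single row of SHAP values) and phrases the top drivers in plain English. The feature names and numbers are illustrative assumptions, not values from our production system.

```python
# Sketch: turn per-feature attribution scores (e.g. one row of SHAP values)
# into a human-readable explanation. Names and scores below are hypothetical.

def explain_prediction(feature_names, contributions, top_k=3):
    """Rank features by absolute contribution and phrase the result."""
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda pair: abs(pair[1]), reverse=True)
    parts = []
    for name, value in ranked[:top_k]:
        direction = "raised" if value > 0 else "lowered"
        parts.append(f"{name} {direction} the fraud score by {abs(value):.2f}")
    return "; ".join(parts)

# Example: attribution scores for one flagged transaction (hypothetical)
features = ["amount_vs_history", "new_merchant", "night_time", "card_present"]
scores = [0.42, 0.18, -0.05, -0.31]

print(explain_prediction(features, scores))
# amount_vs_history raised the fraud score by 0.42; card_present lowered
# the fraud score by 0.31; new_merchant raised the fraud score by 0.18
```

Sorting by absolute value keeps the summary focused on the strongest signals, whether they push toward or away from a fraud label, which is what a fraud analyst reviewing the case needs first.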
Conclusion
Explainability isn't a luxury—it's a necessity for deploying ML in sensitive domains like fraud detection.
Jhury Kevin Lastre
Software Engineer & Cybersecurity Researcher
Currently pursuing a Masters in Cybersecurity at Kookmin University, researching 5G security and eSIM protocols. Leading OWASP Cebu.