
GSK Copay Card Fraud Detection System: Complete Technical Documentation

Author: ML Intern
Product: Trelegy Ellipta (configurable for Nucala / any GSK product)
Methodology: Hybrid Rules + Isolation Forest + SHAP Explainability
Architecture: Drug-agnostic, ground-truth-optional, production-ready


Table of Contents

  1. System Architecture Overview
  2. File-by-File Logic Deep Dive
  3. Data Flow Pipeline
  4. Business Rules Engine (15 Rules)
  5. Machine Learning Model
  6. Scoring & Risk Tiers
  7. SHAP Explainability
  8. Evaluation Framework
  9. Output Artifacts
  10. Design Principles & Decisions
  11. Extending the System

1. System Architecture Overview

The system is a hybrid fraud detection engine that combines:

  • Hard-coded business rules (domain expertise from GSK pharmacy operations)
  • Unsupervised anomaly detection (Isolation Forest - detects behavioral outliers)
  • Post-hoc explainability (SHAP - tells investigators WHY a claim was flagged)

Why Hybrid?

| Component | Strength | Weakness | How Hybrid Fixes It |
| --- | --- | --- | --- |
| Rules | Precise, auditable, no false positives on known patterns | Cannot detect novel fraud | IF catches unknown patterns |
| Isolation Forest | Detects novel anomalies, unsupervised | Black box, hard to audit | SHAP + rules make it explainable |
| SHAP | Human-readable reasons per claim | Computationally expensive | Only run on top anomalies |

Pipeline Flow

Raw GSK Data (.csv / .txt / .txt.gz)
    ↓
[DATA INGESTION] - standardize 70+ columns, parse dates, clean financials
    ↓
[FEATURE ENGINEERING] - generate 60+ features (temporal, rolling, behavioral)
    ↓
[BUSINESS RULES] - apply 15 hard rules, compute severity weights
    ↓
[ML MODEL] - train Isolation Forest on rule-clean data, score ALL claims
    ↓
[PRIORITY SCORING] - blend IF + rules into a unified 0-1 score
    ↓
[RISK TIERS] - low / medium / high / critical
    ↓
[SHAP] - explain top anomalies
    ↓
[EVALUATION] - supervised metrics (if labels) or unsupervised stats
    ↓
[OUTPUTS] - investigation queue, plots, model artifacts

2. File-by-File Logic Deep Dive

2.1 config.py

Purpose: Central configuration hub. Makes the system drug-agnostic.

PRODUCT_CONFIG dict

{
    "product_name": "Trelegy Ellipta",
    "days_supply_expected": 30,      # Expected days of supply per fill
    "quantity_expected": 1,           # Expected inhalers per fill
    "ndc_list": {                     # Valid NDCs with metadata
        "00173089314": {"strength": "100/62.5/25", "indication": "COPD/Asthma"},
        "00173088714": {"strength": "200/62.5/25", "indication": "Asthma"},
    },
    "valid_prescriber_specialties": [...],      # Whitelist
    "suspicious_prescriber_specialties": [...], # Blacklist
    "govt_insurance_blocked": True,
    "early_refill_threshold_days": 23,  # 75% of 30-day supply
    "max_fills_90d": 4,               # More than 4 fills in 90d = stockpiling
    "usual_customary_min/max": (500, 800),  # Expected price range
}

Design Decision: All product-specific thresholds live here. To switch from Trelegy to Nucala, only change this dict.

HIGH_RISK_REJECT_CODES

{"76": "MAXIMIZER CAP", "88": "DUR", "79": "REFILL TOO SOON"}

These are pharmacy system reject codes that are strong fraud indicators:

  • 76 = Patient hit their plan's maximizer cap (attempting to extract more benefit)
  • 88 = DUR (Drug Utilization Review) - clinical concern
  • 79 = Refill too soon - patient tried to refill before allowed

GOVT_INSURANCE_KEYWORDS

List of substrings used to auto-classify insurance plans as "Government" (Medicare/Medicaid/VA/etc.). Government insurance patients are not eligible for GSK copay cards; this is a program violation.

COLUMN_MAP

Maps 70+ raw column names to internal standardized names. Examples:

  • IQVIA_PATIENT_ID → patient_id
  • DATE_OF_FILL → fill_date
  • COPAY_AFTER_BENEFIT → copay_after

Logic: The ingestion pipeline uppercases and strips all incoming column names, then looks them up in this map. Unmapped columns are kept with their original names but ignored by downstream logic.

CATEGORICAL_FEATURES / NUMERIC_FEATURES

Lists of all feature names expected by the ML pipeline. Used to:

  1. Separate columns for StandardScaler (numeric) vs OrdinalEncoder (categorical)
  2. Ensure the feature matrix fed to Isolation Forest is fully numeric

Why OrdinalEncoder over OneHot? OneHot would explode the dimensionality (70+ categories). Isolation Forest handles ordinal-encoded categoricals well because it only cares about split thresholds.

RISK_TIERS

{"low": (0.0, 0.3), "medium": (0.3, 0.6), "high": (0.6, 0.8), "critical": (0.8, 1.0)}

2.2 data_ingestion.py

Purpose: Transform raw GSK transaction files into a clean, standardized DataFrame ready for feature engineering.

parse_yyyymmdd(val)

Input: A value from a date column (could be integer 20231019, string "20231019", or NaN).
Logic:

  1. Convert to string, strip whitespace
  2. Check length == 8 and all digits
  3. Parse with pd.Timestamp.strptime(s, "%Y%m%d")
  4. Return pd.NaT on any failure

Why not pd.to_datetime directly? Raw dates are YYYYMMDD with no separators (e.g., 20231019). pd.to_datetime with format='%Y%m%d' works but we add extra validation to catch corrupted values.
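A minimal sketch of that validation logic (the real function lives in data_ingestion.py; here datetime.strptime does the parsing, and pd.Timestamp wraps the result):

```python
from datetime import datetime

import pandas as pd

def parse_yyyymmdd(val) -> pd.Timestamp:
    """Parse a YYYYMMDD value (int, str, or NaN); return pd.NaT on any failure."""
    s = str(val).strip()
    if len(s) != 8 or not s.isdigit():   # catches NaN ("nan"), blanks, corrupted values
        return pd.NaT
    try:
        return pd.Timestamp(datetime.strptime(s, "%Y%m%d"))
    except ValueError:                   # e.g., month 13 or day 32
        return pd.NaT
```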

load_raw_data(data_path, file_type)

Auto-detection logic:

.txt.gz or .gz  → open with gzip, read as tab-separated
.txt or .tsv    → pd.read_csv(sep="\t")
.csv (default)  → pd.read_csv()

All columns are loaded as dtype=str initially to prevent pandas from guessing types on 70+ columns (which causes mixed-type warnings).
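A sketch of that auto-detection, assuming the txt.gz / txt / csv branches described above:

```python
import gzip

import pandas as pd

def load_raw_data(data_path: str, file_type: str = "auto") -> pd.DataFrame:
    # Resolve the file type from the extension when asked to auto-detect.
    if file_type == "auto":
        lower = str(data_path).lower()
        if lower.endswith(".gz"):
            file_type = "txt.gz"
        elif lower.endswith((".txt", ".tsv")):
            file_type = "txt"
        else:
            file_type = "csv"
    # Load everything as strings so pandas never guesses dtypes on 70+ columns.
    if file_type == "txt.gz":
        with gzip.open(data_path, "rt") as fh:
            return pd.read_csv(fh, sep="\t", dtype=str)
    if file_type == "txt":
        return pd.read_csv(data_path, sep="\t", dtype=str)
    return pd.read_csv(data_path, dtype=str)
```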

standardize_columns(df)

Logic:

  1. Build a lookup: raw_col.strip().upper() → original_name
  2. For each entry in COLUMN_MAP, find the matching raw column
  3. If exact match fails, try fuzzy match (remove underscores, uppercase)
  4. Apply df.rename(columns=rename_map)

Why fuzzy matching? Real-world data sometimes has slight column name variations (IQVIA_PATIENT_ID vs IQVIA PATIENT ID).
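A sketch of the exact-then-fuzzy lookup, assuming COLUMN_MAP maps raw names to internal names as described for config.py:

```python
import pandas as pd

from config import COLUMN_MAP  # {"IQVIA_PATIENT_ID": "patient_id", ...}

def standardize_columns(df: pd.DataFrame) -> pd.DataFrame:
    exact = {c.strip().upper(): c for c in df.columns}
    fuzzy = {c.strip().upper().replace("_", ""): c for c in df.columns}
    rename_map = {}
    for raw, internal in COLUMN_MAP.items():
        key = raw.strip().upper()
        # Try an exact match first, then fall back to the underscore-free form.
        match = exact.get(key) or fuzzy.get(key.replace("_", ""))
        if match is not None:
            rename_map[match] = internal
    return df.rename(columns=rename_map)
```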

parse_dates(df)

Parses these columns through parse_yyyymmdd:

  • date_written, fill_date, received_date, ingestion_time, min_date

Critical: fill_date is the primary temporal anchor for all feature engineering.

clean_financials(df)

Columns cleaned: copay_before, copay_after, benefit_amount, usual_customary, dispensing_fee, sales_tax, remaining_balance, number_of_benefits, total_copay, quantity, days_supply, rx_count, number_of_refills.

Cleaning steps per column:

  1. .astype(str) - everything starts as a string from load
  2. .str.replace(r"[$,]", "", regex=True) - remove currency formatting
  3. .str.strip() - remove whitespace
  4. .replace({"": np.nan, "NULL": np.nan, ...}) - standardize missing values
  5. pd.to_numeric(errors="coerce") - convert to float, invalid → NaN
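The whole chain fits in one helper; a sketch for a single column (the missing-value token list here is illustrative):

```python
import numpy as np
import pandas as pd

def clean_financial_column(s: pd.Series) -> pd.Series:
    """Apply the five cleaning steps above to one column (a sketch)."""
    return pd.to_numeric(
        s.astype(str)
         .str.replace(r"[$,]", "", regex=True)
         .str.strip()
         .replace({"": np.nan, "NULL": np.nan, "N/A": np.nan}),
        errors="coerce",
    )

# Usage: df["copay_after"] = clean_financial_column(df["copay_after"])
```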

derive_identifiers(df)

Creates synthetic IDs where real ones are missing:

  • claim_id = index + "_" + claim_number (guaranteed unique)
  • pharmacy_npi = pharmacy_nabp (fallback: pharmacy_iqvia_id)
  • prescriber_npi = prescriber_id
  • program_id = group_number (fallback: card_id)

Why synthetic? The downstream pipeline needs consistent ID columns. Real NPIs may not be in the raw data, so we proxy with available identifiers.

derive_drug_info(df)

  1. Strip dashes from drug_ndc (NDCs sometimes come as 00173-0893-14)
  2. Look up NDC in PRODUCT_CONFIG["ndc_list"]
  3. Add ndc_strength and ndc_indication columns

Invalid NDCs get "UNKNOWN"; these will likely be filtered out later.

derive_insurance(df)

Logic: For each row, concatenate primary_plan_name + primary_model_type, lowercase, and check against GOVT_INSURANCE_KEYWORDS.

Classification:

  • Contains a government keyword → "Government"
  • Contains commercial/private/employer → "Commercial"
  • Contains self/cash → "Self-Pay"
  • Else → "Other"

Why this matters: Use of copay cards by government-insured patients is a program violation (anti-kickback concerns).
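A row-level sketch of the classification (GOVT_INSURANCE_KEYWORDS comes from config.py; the commercial and self-pay keyword lists here are illustrative):

```python
from config import GOVT_INSURANCE_KEYWORDS  # lowercase substrings, e.g. "medicare"

def classify_insurance(plan_name, model_type) -> str:
    text = f"{plan_name} {model_type}".lower()
    if any(kw in text for kw in GOVT_INSURANCE_KEYWORDS):
        return "Government"
    if any(kw in text for kw in ("commercial", "private", "employer")):
        return "Commercial"
    if any(kw in text for kw in ("self", "cash")):
        return "Self-Pay"
    return "Other"

# df["insurance_type"] = [classify_insurance(p, m) for p, m in
#                         zip(df["primary_plan_name"], df["primary_model_type"])]
```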

derive_patient_info(df)

  • patient_age = NaN (not in raw data, placeholder for future enrichment)
  • patient_state = pharmacy_state (fallback; assumes the patient fills near home)

derive_reject_flags(df)

Creates 5 binary flags:

  • has_reject_code - any reject present
  • high_risk_reject - reject code in {76, 88, 79}
  • maximizer_reject - reject code == 76
  • address_reject - reject type == "ADDRESS REJECT"
  • business_rule_reject - reject type in the business rule category

derive_linked_claim_flags(df)

  • is_adjusted - claim_type in {A, ADJUSTMENT}
  • has_linked_claim - linked_claim_id is not null
  • is_reversal - claim_type in {R, REVERSAL}

derive_submission_flags(df)

  • paper_submission - submission_method contains "PAPER"
  • electronic_submission - submission_method contains "ELECTRONIC"
  • mail_order - mail_order_indicator in {Y, YES, TRUE, 1}

Why paper submission is suspicious: Electronic claims are standard. Paper claims are harder to audit and may indicate attempts to bypass automated checks.

derive_daw_flags(df)

DAW (Dispense As Written) codes:

  • daw_brand_required - code == "1" (prescriber-mandated brand)
  • daw_patient_request - code == "2" (patient requested brand)

derive_plan_switching(df)

Sets placeholder 0/1 values. The actual values are computed in feature engineering, once patient-level grouping is available.

filter_valid_claims(df)

Filtering criteria:

  1. claim_type NOT in {R, REVERSAL, A, ADJUSTMENT} - only original claims are analyzed
  2. patient_id is not null
  3. fill_date is not null
  4. drug_ndc is not null

Why exclude reversals/adjustments? They are corrections to original claims. Including them would create negative/duplicate records that distort behavioral features.
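A sketch of those four filters:

```python
import pandas as pd

def filter_valid_claims(df: pd.DataFrame) -> pd.DataFrame:
    # Keep only original claims that carry the identifiers the pipeline depends on.
    mask = (
        ~df["claim_type"].isin(["R", "REVERSAL", "A", "ADJUSTMENT"])
        & df["patient_id"].notna()
        & df["fill_date"].notna()
        & df["drug_ndc"].notna()
    )
    return df.loc[mask].copy()
```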


2.3 feature_engineering_v2.py

Purpose: Transform cleaned data into 60+ predictive features for the ML model.

add_temporal_features(df)

days_between_fills:

df.groupby("patient_id")["fill_date"].diff().dt.days

For each patient, compute days between consecutive fills. First fill per patient = NaN.

early_refill_flag:

df["days_between_fills"] < PRODUCT_CONFIG["early_refill_threshold_days"]

If a patient refills before 23 days (75% of a 30-day supply), it's suspicious: they may be stockpiling or diverting medication.

days_since_first_fill:

(df["fill_date"] - first_fill_per_patient).dt.days

Patient tenure. Long-tenure patients behave differently from new ones.

claim_month, claim_dow, claim_quarter: Temporal seasonality features. Fraud patterns may cluster (e.g., end-of-quarter benefit exhaustion).

rx_lag_days:

(fill_date - date_written).dt.days

If a prescription was written months ago but filled today, it may be a stolen/script-shopped prescription.

add_rolling_window_features(df)

Uses pandas .rolling() with time-based windows ("7D", "30D", "90D").

Patient-level rolling features:

| Feature | Window | Aggregation | Fraud Signal |
| --- | --- | --- | --- |
| patient_fill_count_7d | 7 days | count | Burst filling |
| patient_fill_count_30d | 30 days | count | Excessive frequency |
| patient_fill_count_90d | 90 days | count | Stockpiling |
| patient_copay_spend_7d/30d/90d | 7/30/90 days | sum(copay_after) | Rapid spending |
| patient_total_claim_7d/30d/90d | 7/30/90 days | sum(usual_customary) | High-value activity |
| patient_benefit_7d/30d/90d | 7/30/90 days | sum(benefit_amount) | Benefit extraction |

Pharmacy-level rolling:

  • pharmacy_claim_count_30d/90d - how busy is this pharmacy? Sudden spikes may indicate organized fraud.

Prescriber-level rolling:

  • prescriber_claim_count_30d/90d - high-volume prescribers may be pill mills.

Implementation note: Rolling windows are computed with pd.Grouper(key="fill_date", freq="30D"), which buckets by calendar windows; the results are then merged back via df.merge(..., how="left").
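For reference, a true trailing-window count can also be written with a time-based .rolling(); a sketch for the 30-day patient fill count (the pipeline itself uses the pd.Grouper bucketing described above):

```python
import pandas as pd

def add_trailing_fill_count(df: pd.DataFrame, window: str = "30D") -> pd.DataFrame:
    # Sort so the per-group rolling output aligns positionally with the frame.
    out = df.sort_values(["patient_id", "fill_date"]).reset_index(drop=True)
    rolled = (
        out.set_index("fill_date")
           .groupby("patient_id")["patient_id"]  # any non-null column works for a count
           .rolling(window)
           .count()
           .reset_index(drop=True)
    )
    out[f"patient_fill_count_{window.lower()}"] = rolled
    return out
```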

add_patient_behavioral_features(df)

unique_pharmacies_overall: Count distinct pharmacies per patient. >3 suggests "pharmacy hopping" (avoiding detection).

unique_programs_per_patient: Count distinct copay programs. >1 suggests "card stacking" (using multiple cards for same patient).

unique_prescribers_per_patient: Multiple prescribers for same patient may indicate doctor shopping.

total_fills_per_patient: Total lifetime fills. New patients with high fill counts are suspicious.

avg_days_between_fills / std_days_between_fills:

  • Low average = frequent refiller
  • Low std = robotic/organized pattern (real patients have irregular schedules)

max_fills_any_30d: Peak activity in any 30-day window.

copay_deviation_from_patient_mean:

abs(copay_after - patient_mean_copay)

If a patient's copay suddenly jumps (e.g., from $0 to $50), it may indicate a plan switch or benefit exhaustion.

total_benefit_per_patient: Cumulative benefit extracted. Very high = potential abuse.
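A sketch of a few of these aggregates using groupby().transform, which broadcasts each patient-level statistic back onto every claim row:

```python
import pandas as pd

def add_patient_behavioral_features(df: pd.DataFrame) -> pd.DataFrame:
    g = df.groupby("patient_id")
    df["unique_pharmacies_overall"] = g["pharmacy_npi"].transform("nunique")
    df["unique_programs_per_patient"] = g["program_id"].transform("nunique")
    df["unique_prescribers_per_patient"] = g["prescriber_npi"].transform("nunique")
    df["total_fills_per_patient"] = g["claim_id"].transform("count")
    df["avg_days_between_fills"] = g["days_between_fills"].transform("mean")
    df["std_days_between_fills"] = g["days_between_fills"].transform("std")
    df["total_benefit_per_patient"] = g["benefit_amount"].transform("sum")
    return df
```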

add_pharmacy_features(df)

pharmacy_unique_patients: How many distinct patients use this pharmacy? Low number + high claims = targeted fraud.

pharmacy_total_claims: Total claim volume.

pharmacy_claims_per_patient_ratio:

pharmacy_total_claims / pharmacy_unique_patients

Ratio >> 1 means the pharmacy serves a small patient pool very heavily: a possible "phantom pharmacy" or collusion.

pharmacy_avg_copay: Average copay at this pharmacy. Deviations from network average may indicate upcoding.

pharmacy_mail_order_pct: % of mail order claims. Legitimate specialty pharmacies have known baselines.

pharmacy_reject_rate: % of claims with reject codes. High reject rate = data quality issues or aggressive billing.

pharmacy_paper_submission_rate: % paper claims. High = suspicious (avoids electronic audit trail).

add_prescriber_features(df)

prescriber_unique_patients: Patient panel size. Very high = possible pill mill.

prescriber_total_claims: Total prescribing volume.

prescriber_specialty_valid:

prescriber_specialty.lower() in valid_specialties

A dermatologist prescribing Trelegy (a COPD/asthma inhaler) is medically unusual and warrants review.

add_reject_code_features(df)

patient_reject_count / patient_reject_rate: How often do this patient's claims get rejected? A high rate suggests data problems or gaming the system.

patient_highrisk_reject_count / patient_maximizer_count: Counts of specifically bad rejects (76, 88, 79).

add_plan_switching_features(df)

unique_plans_per_patient: Distinct primary_plan_id values. >1 = plan switching.

plan_switch_flag: Binary indicator for >1 plan.

unique_bins_per_patient / bin_switch_flag: Same for BIN (Bank Identification Number). Rapid BIN changes may indicate attempts to find a plan that covers the copay card.

add_submission_features(df)

patient_paper_submission_rate: % of patient's claims submitted on paper. High = suspicious.

add_daw_features(df)

patient_daw_brand_count: How many times the patient demanded brand (DAW = 1 or 2). Copay cards only work on brand, so excessive DAW may indicate copay card abuse.

add_linked_claim_features(df)

patient_adjusted_count / patient_linked_count: How many times have this patient's claims been adjusted or linked to reversals? A high count indicates an unstable billing history.

add_drug_specific_features(df)

patient_ndc_count / ndc_switch_flag: Has the patient received different strengths (different NDCs)? This could indicate "strength switching" to extract more benefit.

govt_insurance_flag: Already computed in ingestion, preserved here.

quantity_anomaly / days_supply_anomaly: Does the claim deviate from expected quantity (1) or days supply (30)?

age_violation_flag: Placeholder. Would trigger if patient_age < 18.

new_patient_burst: Patient's first fill was within 7 days of enrollment AND they have >1 fill total. Suggests immediate abuse.

cross_state_fill: patient_state != pharmacy_state. May indicate traveling to find permissive pharmacies.

copay_zscore / total_claim_zscore / uc_zscore:

(col - patient_mean) / patient_std

Per-patient Z-scores. A claim that's 3σ from a patient's own history is anomalous.
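A sketch of the per-patient Z-score computation (column names illustrative; a patient with a single fill has undefined std, hence the fill with 0):

```python
def add_patient_zscores(df, cols=("copay_after", "usual_customary")):
    # Divide-by-zero std produces inf; single-fill patients produce NaN.
    for col in cols:
        g = df.groupby("patient_id")[col]
        z = (df[col] - g.transform("mean")) / g.transform("std")
        df[f"{col}_zscore"] = z.replace([float("inf"), float("-inf")], 0).fillna(0.0)
    return df
```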

benefit_ratio:

benefit_amount / usual_customary

If the copay card paid 100% of usual & customary, that's suspicious: cards are supposed to supplement, not replace, insurance.

scale_and_encode(df)

Categorical handling:

  • OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
  • Unseen categories at inference time get -1 (handled gracefully)
  • Original values saved as _orig_{column} for rule evaluation

Numeric handling:

  • StandardScaler() - zero mean, unit variance
  • NaN filled with 0 before scaling

Why this order? Rules are applied BEFORE scaling because rules need human-readable thresholds (e.g., quantity != 1). Scaling is only for the ML model.
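A sketch of the encode/scale step under those choices (CATEGORICAL_FEATURES and NUMERIC_FEATURES come from config.py):

```python
from sklearn.preprocessing import OrdinalEncoder, StandardScaler

from config import CATEGORICAL_FEATURES, NUMERIC_FEATURES

def scale_and_encode(df):
    # Preserve raw categorical values so the rules can still read them.
    for col in CATEGORICAL_FEATURES:
        df[f"_orig_{col}"] = df[col]
    encoder = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
    df[CATEGORICAL_FEATURES] = encoder.fit_transform(df[CATEGORICAL_FEATURES].astype(str))
    scaler = StandardScaler()
    df[NUMERIC_FEATURES] = scaler.fit_transform(df[NUMERIC_FEATURES].fillna(0))
    return df, encoder, scaler
```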


2.4 fraud_detection_pipeline_v2.py

Purpose: Orchestrates the entire pipeline: rules, model, scoring, SHAP, evaluation, plots.

apply_business_rules(df)

Applies 15 hard rules. Each rule is a binary column (0 or 1).

Rule 1: Early Refill

df["days_between_fills"] < threshold

The patient is refilling before 75% of the expected supply is consumed. Signal: stockpiling, diversion.

Rule 2: Impossible Quantity

quantity != expected_qty

Trelegy should be 1 inhaler per fill. Quantity of 2+ suggests data error or duplicate billing.

Rule 3: Wrong Days Supply

days_supply != expected_ds

Trelegy should be 30 days. 90-day fills (if not mail order) may indicate unusual prescribing.

Rule 4: Government Insurance

insurance_type == "Government"

Program violation. Copay cards cannot be used with Medicare/Medicaid.

Rule 5: Underage

patient_age < 18

Placeholder: Trelegy is approved for adults. Pediatric use via copay card is off-label and suspicious.

Rule 6: Duplicate Billing

df.duplicated(subset=["patient_id", "fill_date", "pharmacy_npi"], keep=False)

Same patient, same day, same pharmacy: a possible duplicate claim submission.

Rule 7: NDC Switch

patient_ndc_count > 1

Patient received multiple strengths (100/62.5/25 AND 200/62.5/25). May indicate strength switching to maximize benefit.

Rule 8: Suspicious Specialty

prescriber_specialty not in valid_specialties

A cardiologist prescribing an inhaler is medically unusual. Possible prescriber collusion or stolen credentials.

Rule 9: Multi-Program

unique_programs_per_patient > 1

Patient enrolled in multiple copay programs simultaneously. Card stacking.

Rule 10: Excessive Fills (90d)

patient_fill_count_90d > 4

More than 4 fills in 90 days for a 30-day supply drug = stockpiling or diversion.

Rule 11: High-Risk Reject

reject_code in {76, 88, 79}

These reject codes indicate the claim was flagged by the pharmacy system itself.

Rule 12: Maximizer Cap

maximizer_reject == 1

Specifically reject code 76. The patient hit the plan maximizer and may be attempting to extract copay card funds after the insurance cap.

Rule 13: Paper Submission

paper_submission == 1

Avoids electronic audit trail. Common in fraudulent schemes.

Rule 14: Plan Switch

plan_switch_flag == 1

Patient changed insurance plans recently. May be searching for a plan that allows copay card stacking.

Rule 15: Linked Claim

has_linked_claim == 1

Claim is linked to a reversal or adjustment. Indicates unstable or contested billing.

Severity Weights: Not all rules are equal. Government insurance and maximizer caps are weighted higher (2.0) than early refills (1.0):

severity_weights = {
    "rule_early_refill": 1.0,
    "rule_impossible_qty": 1.5,
    "rule_govt_insurance": 2.0,
    ...
}

Severity is normalized to [0, 1] by dividing by max possible severity.
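A sketch of that normalization (severity_weights as above; rules absent from the data are simply skipped):

```python
import numpy as np
import pandas as pd

def compute_rule_severity(df: pd.DataFrame, severity_weights: dict) -> pd.Series:
    # Weighted sum of triggered rules, normalized by the maximum possible severity.
    max_severity = sum(severity_weights.values())
    severity = np.zeros(len(df))
    for rule, weight in severity_weights.items():
        if rule in df.columns:
            severity += weight * df[rule].to_numpy()
    return pd.Series(severity / max_severity, index=df.index, name="rule_severity")
```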

train_isolation_forest(df_features, rule_clean_mask, contamination)

Key design decision: Train ONLY on rule-clean data.

X_clean = df_features.loc[rule_clean_mask].values  # rule_flag == 0
model.fit(X_clean)

Why? If we train on rule-flagged data, the model learns that "government insurance is normal" and stops flagging it. By training only on clean data, the model learns "normal patient behavior" and flags deviations.

Scoring ALL claims:

scores_raw = model.decision_function(df_features.values)
scores = -scores_raw  # flip so higher = more anomalous
scores_norm = (scores - min) / (max - min)  # [0, 1]

decision_function returns an anomaly score where lower (more negative) = more anomalous. We flip and normalize for interpretability.

compute_priority_score(df, if_scores)

Blended scoring formula:

priority_score = 0.50 * if_anomaly_score_norm + 
                 0.30 * rule_severity + 
                 0.20 * rule_flag

| Component | Weight | Rationale |
| --- | --- | --- |
| IF anomaly | 50% | Catches novel, behavioral fraud |
| Rule severity | 30% | Domain expertise, weighted by criticality |
| Rule flag | 20% | Binary "any rule broken" signal |

Risk tier assignment:

< 0.3   → "low"
< 0.6   → "medium"
< 0.8   → "high"
≥ 0.8   → "critical"
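A sketch of the blended score and tier cut (np.select evaluates the conditions in order, mirroring the thresholds above):

```python
import numpy as np
import pandas as pd

def compute_priority_score(df: pd.DataFrame) -> pd.DataFrame:
    df["priority_score"] = (
        0.50 * df["if_anomaly_score_norm"]
        + 0.30 * df["rule_severity"]
        + 0.20 * df["rule_flag"]
    )
    s = df["priority_score"]
    df["risk_tier"] = np.select(
        [s < 0.3, s < 0.6, s < 0.8],
        ["low", "medium", "high"],
        default="critical",
    )
    return df
```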

evaluate_with_ground_truth(df)

Only runs if is_fraud column exists.

Metrics computed:

  • AUPRC (Area Under Precision-Recall Curve) - the primary metric for imbalanced fraud data
  • AUROC - overall discrimination ability
  • F1, Precision, Recall - at the 0.5 threshold
  • Precision@K (K=100, 250, 500, 1000) - "Of the top K flagged claims, how many are actually fraud?"
  • Confusion matrix - full TP/FP/TN/FN breakdown
  • Classification report - per-class precision/recall/F1
  • Fraud by type - if a fraud_type column exists, breakdown by fraud category
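A sketch of the headline metrics with sklearn (average_precision_score is sklearn's AUPRC estimate; precision_at_k is a small hypothetical helper):

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def precision_at_k(y_true, scores, k: int) -> float:
    # Fraction of confirmed fraud among the k highest-scored claims.
    top_k = np.argsort(scores)[::-1][:k]
    return float(np.asarray(y_true)[top_k].mean())

def headline_metrics(y_true, scores) -> dict:
    return {
        "auprc": average_precision_score(y_true, scores),
        "auroc": roc_auc_score(y_true, scores),
        **{f"precision@{k}": precision_at_k(y_true, scores, k)
           for k in (100, 250, 500, 1000)},
    }
```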

evaluate_unsupervised(df)

Runs when no ground truth labels exist.

Metrics:

  • anomaly_detection_rate - % of claims with IF score > 0.5
  • rule_flag_rate - % of claims flagged by any rule
  • risk_tier_distribution - count per tier

compute_shap_values(model, df_features, feature_names, sample_size)

SHAP (SHapley Additive exPlanations) provides human-readable reasons for model predictions.

Process:

  1. shap.TreeExplainer(model) - explainer specialized for tree-based models
  2. Sample background data (default 2000 rows) for speed
  3. explainer.shap_values(bg.values) - compute SHAP values
  4. If multi-output (some sklearn versions), take the anomaly class

Output: DataFrame of SHAP values per feature per sample.
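A sketch of that process (shap's TreeExplainer accepts a fitted sklearn IsolationForest; the list check covers versions that return one array per output):

```python
import pandas as pd
import shap

def compute_shap_values(model, df_features: pd.DataFrame, sample_size: int = 2000):
    # Sample a background set for tractability, then explain it.
    bg = df_features.sample(min(sample_size, len(df_features)), random_state=42)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(bg.values)
    if isinstance(shap_values, list):  # multi-output variants
        shap_values = shap_values[0]
    return pd.DataFrame(shap_values, columns=df_features.columns, index=bg.index)
```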

save_shap_summary / save_shap_csv

02_shap_summary.png: Beeswarm plot showing:

  • X-axis: SHAP value (impact on anomaly score)
  • Y-axis: Features ranked by importance
  • Color: Feature value (red = high, blue = low)

03_shap_bar_importance.png: Bar chart of mean absolute SHAP per feature.

feature_importance_shap.csv: Tabular export of feature importance for audit trails.

plot_evaluation_metrics(df, metrics, outdir)

01_evaluation_metrics.png (2×2 grid):

| Position | Plot | Description |
| --- | --- | --- |
| Top-left | Score distribution by tier | Histogram of priority scores colored by risk tier |
| Top-right | ROC curve | TPR vs FPR, with AUROC label |
| Bottom-left | PR curve | Precision vs Recall, with AUPRC label |
| Bottom-right | Tier counts | Bar chart of claims per risk tier |

plot_rule_breakdown(df, outdir)

04_rule_breakdown.png: Horizontal bar chart of how many claims each rule flagged.

plot_risk_tier_distribution(df, outdir)

05_risk_tier_distribution.png (1×2 grid):

  • Left: Claim counts per tier
  • Right: Fraud rate per tier (if ground truth available)

2.5 run_all_v2.py

Purpose: CLI entry point. Validates arguments, creates directories, runs pipeline, prints summary.

CLI Arguments

| Argument | Default | Description |
| --- | --- | --- |
| --data-path | required | Path to raw GSK file |
| --file-type | auto | csv, txt, txt.gz, or auto-detect |
| --contamination | 0.03 | Expected fraud rate for Isolation Forest |
| --results-dir | results | Where to save CSVs and PNGs |
| --model-dir | model | Where to save PKL artifacts |

Argument parsing: Uses argparse.RawDescriptionHelpFormatter to display usage examples in --help.

Directory creation:

Path(args.results_dir).mkdir(parents=True, exist_ok=True)
Path(args.model_dir).mkdir(parents=True, exist_ok=True)

Execution flow:

  1. Print banner with config summary
  2. Call run_pipeline(...) with all CLI args
  3. Print completion summary with:
    • Total claims scored
    • Rule-flagged count
    • Priority score range
    • AUPRC / AUROC / F1 (if labels exist)
    • Output file manifest

2.6 Legacy Files

These exist for backwards compatibility and testing without real data.

generate_synthetic_data.py

Creates fake GSK data with known fraud patterns embedded:

  • Randomly assigns reject codes, DAW codes, claim types
  • Injects is_fraud label (3% fraud rate)
  • Used for pipeline testing before real data is available

feature_engineering.py

Basic feature engineering for synthetic data (subset of v2 features).

fraud_detection_pipeline.py

Simple IF-only pipeline for synthetic data. No rules, no SHAP, no evaluation.

run_all.py

Legacy CLI that runs the synthetic pipeline with no arguments.


3. Data Flow Pipeline

Step-by-Step Transformation

Step 0: Raw File
  └── GSK_COPAY_TRANSACTION_DAILY_20250725.TXT.GZ
      └── 70+ columns, tab-separated, dates as YYYYMMDD

Step 1: load_raw_data()
  └── DataFrame (all strings, ~70 columns, N rows)

Step 2: standardize_columns()
  └── Columns renamed via COLUMN_MAP
  └── e.g., IQVIA_PATIENT_ID → patient_id

Step 3: parse_dates()
  └── fill_date: 20231019 → 2023-10-19 (Timestamp)

Step 4: clean_financials()
  └── copay_after: "$45.00" → 45.0 (float)

Step 5: derive_*()
  └── Adds ~20 derived columns (flags, IDs, insurance type)

Step 6: filter_valid_claims()
  └── Removes reversals, adjustments, null records
  └── e.g., 100,000 → 92,000 rows

Step 7: add_temporal_features()
  └── Adds days_between_fills, early_refill_flag, etc.

Step 8: add_rolling_window_features()
  └── Adds patient_fill_count_7d/30d/90d, etc.

Step 9: add_behavioral_features()
  └── Adds unique_pharmacies, total_fills, etc.

Step 10: add_pharmacy/prescriber/drug features
  └── Adds 30+ more features

Step 11: scale_and_encode()
  └── Categoricals → OrdinalEncoder → integers
  └── Numerics → StandardScaler → z-scores
  └── Output: fully numeric matrix for ML

Step 12: apply_business_rules()
  └── 15 binary rule columns + severity score

Step 13: train_isolation_forest()
  └── Fit on rule-clean rows only
  └── Score all rows → anomaly scores

Step 14: compute_priority_score()
  └── Blend IF + rules → priority_score [0, 1]
  └── Assign risk tiers

Step 15: SHAP + evaluation + plots
  └── Generate all outputs

4. Business Rules Engine (15 Rules)

Rule Taxonomy

| Category | Rules | Detection Target |
| --- | --- | --- |
| Temporal Abuse | 1, 10 | Early refills, stockpiling |
| Data Integrity | 2, 3, 6 | Wrong quantity, duplicate billing |
| Program Violation | 4, 5, 9 | Govt insurance, underage, card stacking |
| Provider Fraud | 8 | Suspicious prescriber specialty |
| Benefit Abuse | 7, 12 | NDC switching, maximizer caps |
| Submission Fraud | 13 | Paper claims |
| Plan Gaming | 14 | Plan/BIN switching |
| Rejects | 11 | System-rejected claims |
| Linked Claims | 15 | Reversals, adjustments |

Severity Weighting Rationale

| Weight | Rules | Reason |
| --- | --- | --- |
| 2.0 | Govt insurance, underage, multi-program, maximizer | Clear program violations with legal/compliance risk |
| 1.5 | Impossible qty, wrong DS, duplicate, NDC switch, excessive fills, high-risk reject, plan switch, suspicious specialty | Strong fraud indicators but may have edge-case explanations |
| 1.0 | Early refill, paper submission, linked claim | Indicators that need context to confirm |

5. Machine Learning Model

Isolation Forest

IsolationForest(
    n_estimators=200,        # 200 trees (good balance of speed/accuracy)
    contamination=0.03,      # Assume 3% fraud rate (tunable)
    max_samples="auto",      # Use min(256, n_samples) per tree
    max_features=1.0,        # Use all features at each split
    bootstrap=False,         # No sampling with replacement
    random_state=42,         # Reproducibility
    n_jobs=-1,               # Use all CPU cores
)

How Isolation Forest Works

  1. Build random trees: Each tree randomly selects a feature and split value
  2. Anomalies are isolated faster: Unusual points (e.g., pharmacy with 1000 claims, 1 patient) reach leaf nodes in fewer splits
  3. Anomaly score: Average path length across all trees. Shorter path = more anomalous.
  4. Contamination parameter: Sets the threshold for "anomaly" classification. 0.03 = top 3% most anomalous are flagged.

Why Not Supervised?

  • Fraud labels are expensive to acquire (requires manual investigator review)
  • Fraud patterns evolve (supervised models go stale)
  • Unsupervised models detect novel schemes

Training Strategy

Critical: Train only on rule_flag == 0 (clean claims).

Clean claims: 85,000 rows → learn "normal patient behavior"
Rule-flagged:  7,000 rows → excluded from training
All claims:   92,000 rows → scored by the trained model

If we included rule-flagged claims in training, the model would learn that government insurance claims are "normal" because they appear frequently in the training data. By excluding them, the model sees them as anomalous at inference time.


6. Scoring & Risk Tiers

Priority Score Formula

priority_score = 0.50 × if_anomaly_score_norm +
                 0.30 × rule_severity +
                 0.20 × rule_flag

Component Breakdown

IF Anomaly (50%):

  • Captures behavioral patterns not covered by rules
  • Examples: patient visits 10 pharmacies in 30 days, fills at 2 AM every time
  • Range: [0, 1] after normalization

Rule Severity (30%):

  • Weighted sum of triggered rules, normalized
  • A claim breaking 3 high-weight rules scores ~0.5
  • Range: [0, 1]

Rule Flag (20%):

  • Binary: was ANY rule triggered?
  • Simple but effective: any rule break gets a 0.2 base score
  • Range: {0, 1}

Risk Tier Actions

| Tier | Score | Action | Expected Volume |
| --- | --- | --- | --- |
| Low | 0.0–0.3 | No action | ~70–80% of claims |
| Medium | 0.3–0.6 | Automated monitoring, trend alerts | ~15–20% |
| High | 0.6–0.8 | Manual investigation queue | ~3–5% |
| Critical | 0.8–1.0 | Immediate investigation, potential hold | ~1–2% |

7. SHAP Explainability

Why SHAP?

Investigators need to justify why a claim was flagged. SHAP provides:

  • Feature-level attribution: "This claim was flagged because the patient visited 5 pharmacies in 30 days"
  • Directionality: "High unique_pharmacies increased the score by +0.15"
  • Consistency: SHAP values are mathematically guaranteed to sum to the prediction

TreeExplainer for Isolation Forest

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(background_data)

Background data: A sample of 2000 rows (not all data, which would be too slow).

Interpreting SHAP Output

Beeswarm plot (02_shap_summary.png):

  • Each dot = one claim's SHAP value for one feature
  • X-position = impact on anomaly score (right = increases anomaly)
  • Color = feature value (red = high, blue = low)
  • Vertical spread = how many claims affected

Example interpretation:

"pharmacy_claims_per_patient_ratio is the #1 feature. Red dots on the right mean pharmacies with very high claims-per-patient ratios strongly drive anomaly scores. This suggests phantom pharmacies or organized fraud."


8. Evaluation Framework

With Ground Truth (is_fraud labels)

AUPRC (Primary Metric):

  • Precision-Recall AUC is better than ROC for imbalanced data
  • A random classifier on 3% fraud = AUPRC = 0.03
  • Good model = AUPRC > 0.30

AUROC:

  • Less sensitive to imbalance
  • Good model = AUROC > 0.80

Precision@K:

  • "Of the top K claims we send to investigators, what % are actually fraud?"
  • K=100: investigators can review 100 claims/day, so optimize this
  • K=500: weekly batch review

Confusion Matrix:

              Predicted
              0      1
Actual 0   TN     FP
Actual 1   FN     TP

In fraud detection, we care more about minimizing FN (missing fraud) than FP (false alarms are manually reviewed anyway).

Without Ground Truth (Unsupervised)

Anomaly detection rate: % of claims with IF score > 0.5. Should be close to contamination parameter.

Rule flag rate: % of claims flagged by any rule. A domain benchmark; if this spikes, data quality may have degraded.

Tier distribution: Should be ~70% low, ~20% medium, ~7% high, ~3% critical. If critical spikes, investigate.


9. Output Artifacts

Investigation Queue

investigation_queue_top500.csv is the most important output.

| Column | Description |
| --- | --- |
| claim_id | Unique claim identifier |
| patient_id | Patient |
| fill_date | When filled |
| pharmacy_npi / prescriber_npi | Providers |
| drug_ndc | Product |
| priority_score | 0–1 fraud likelihood |
| risk_tier | low/medium/high/critical |
| rule_flag_count | How many rules broken |
| rule_severity | Weighted severity |
| if_anomaly_score_norm | ML anomaly score |
| rule_flag | Any rule triggered |
| is_fraud | Ground truth (if available) |

Model Artifacts

| File | Purpose | Reload Usage |
| --- | --- | --- |
| isolation_forest_model.pkl | Trained IF model | Score new claims without retraining |
| scaler.pkl | StandardScaler fit on training data | Transform new numeric features identically |
| encoder.pkl | OrdinalEncoder fit on training data | Encode new categorical features identically |
| feature_names.pkl | Ordered feature list | Ensure column order consistency at inference |

Inference workflow on new data:

  1. Ingest → same data_ingestion.py
  2. Engineer features → same feature_engineering_v2.py
  3. Load scaler + encoder + model from the PKL files
  4. Transform features using the loaded scaler/encoder
  5. model.decision_function(X_new) → anomaly scores
  6. Apply the saved priority score formula
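A sketch of that workflow (paths match the artifact table above; the encoder/scaler are applied with transform, never re-fit on new data):

```python
import joblib

from config import CATEGORICAL_FEATURES, NUMERIC_FEATURES

def score_new_claims(df_new):
    # Load the serialized artifacts.
    model = joblib.load("model/isolation_forest_model.pkl")
    scaler = joblib.load("model/scaler.pkl")
    encoder = joblib.load("model/encoder.pkl")
    feature_names = joblib.load("model/feature_names.pkl")

    # Transform with the fitted encoder/scaler, exactly as at training time.
    df_new[CATEGORICAL_FEATURES] = encoder.transform(df_new[CATEGORICAL_FEATURES].astype(str))
    df_new[NUMERIC_FEATURES] = scaler.transform(df_new[NUMERIC_FEATURES].fillna(0))

    raw = model.decision_function(df_new[feature_names].values)
    scores = -raw  # flip so higher = more anomalous
    return (scores - scores.min()) / (scores.max() - scores.min())
```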

10. Design Principles & Decisions

Drug-Agnostic Design

All product-specific parameters in config.py. To add Nucala:

  1. Change PRODUCT_CONFIG dict
  2. Update NDC list
  3. Adjust days_supply_expected (Nucala is every 4 weeks = 28 days)
  4. Update valid_prescriber_specialties (Nucala treats asthma, which adds pediatric allergists)
  5. No code changes needed

Ground-Truth Optional

The system works without is_fraud labels:

  • Rules flag known patterns
  • IF flags behavioral outliers
  • SHAP explains why
  • Unsupervised metrics track performance

When labels become available, the same pipeline computes AUPRC/AUROC.

Production Readiness

  • Deterministic: random_state=42 everywhere
  • Serializable: All transformers and model saved as joblib PKL
  • Reproducible: Same input β†’ same output
  • Auditable: Every score is explainable via SHAP + rule breakdown
  • Configurable: Contamination, thresholds, weights all via CLI

Performance Considerations

  • Rolling window features use pandas' optimized Cython operations
  • Isolation Forest is O(n log n), so it scales to millions of claims
  • SHAP is only computed on a 2000-row sample (not all data)
  • All file I/O uses parquet for intermediate steps (a fast binary format)

11. Extending the System

Adding a New Business Rule

  1. Add the rule logic in apply_business_rules():
df["rule_new_pattern"] = (df["some_feature"] > threshold).astype(int)
  2. Add a severity weight in the severity_weights dict:
"rule_new_pattern": 1.5,
  3. Add to the rule_cols auto-detection (already handled by startswith("rule_"))

Adding a New Feature

  1. Add computation in feature_engineering_v2.py (pick appropriate add_*() function)
  2. Add feature name to NUMERIC_FEATURES or CATEGORICAL_FEATURES in config.py
  3. Re-run the pipeline; the model will automatically use the new feature

Switching to a Different ML Model

Replace train_isolation_forest() with any sklearn-compatible anomaly detector:

  • LocalOutlierFactor - density-based, good for clustered data
  • OneClassSVM - good for high-dimensional data
  • EllipticEnvelope - assumes a Gaussian distribution

Keep the same interface: model.fit(X_clean), model.decision_function(X_all).
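For example, a hedged sketch of swapping in LocalOutlierFactor (X_clean and X_all as in the training step; LOF needs novelty=True to expose decision_function on unseen data):

```python
from sklearn.neighbors import LocalOutlierFactor

# novelty=True enables scoring of data not seen during fit.
model = LocalOutlierFactor(n_neighbors=20, novelty=True, n_jobs=-1)
model.fit(X_clean)                        # rule-clean rows only, as with the IF
scores = -model.decision_function(X_all)  # flip so higher = more anomalous
```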

Adding Ground Truth Labels Later

Simply add an is_fraud column (0/1) to the input data before ingestion. The pipeline auto-detects it and computes supervised metrics.


Appendix: Complete File Manifest

trelegy_copay_fraud/
├── config.py                          # Configuration & constants
├── data_ingestion.py                  # Raw → clean DataFrame
├── feature_engineering_v2.py          # 60+ feature generation
├── fraud_detection_pipeline_v2.py     # Full pipeline orchestration
├── run_all_v2.py                      # CLI entry point
├── requirements.txt                   # Python dependencies
├── README.md                          # User-facing documentation
├── DOCUMENTATION.md                   # This file: technical deep dive
├── .gitignore                         # Git exclusions
├── generate_synthetic_data.py         # Test data generator (legacy)
├── feature_engineering.py             # Legacy synthetic features
├── fraud_detection_pipeline.py        # Legacy simple pipeline
├── run_all.py                         # Legacy CLI
├── data/                              # User-provided raw data
│   └── gsk_copay_transactions.csv
├── results/                           # Generated outputs
│   ├── investigation_queue_top500.csv
│   ├── scored_claims_full.csv
│   ├── feature_importance_shap.csv
│   ├── metrics.json
│   ├── 01_evaluation_metrics.png
│   ├── 02_shap_summary.png
│   ├── 03_shap_bar_importance.png
│   ├── 04_rule_breakdown.png
│   └── 05_risk_tier_distribution.png
└── model/                             # Serialized artifacts
    ├── isolation_forest_model.pkl
    ├── scaler.pkl
    ├── encoder.pkl
    └── feature_names.pkl

End of Documentation