# Tests

This directory contains tests for the meta_api_grabber project.

## Running Tests

Install test dependencies:

```bash
uv sync --extra test
```

Run all tests:

```bash
uv run pytest
```

Run a specific test file:

```bash
uv run pytest tests/test_field_schema_validation.py -v
```

Run with verbose output:

```bash
uv run pytest tests/test_field_schema_validation.py -v -s
```

## Test Files

### `test_field_schema_validation.py` (Integration Test)

This is a critical integration test that validates that all fields requested by the `grab_*` methods in `scheduled_grabber.py` exist in the actual database schema.

**What it does:**

1. Parses `db_schema.sql` to extract the actual table columns
2. Checks the fields requested by each grab method:
   - `grab_account_insights()` → `account_insights` table
   - `grab_campaign_insights()` → `campaign_insights` table
   - `grab_adset_insights()` → `adset_insights` table
   - `grab_campaign_insights_by_country()` → `campaign_insights_by_country` table
3. Verifies that all requested fields exist in the corresponding database table

**Why this test is important:**

When new fields are added to the Meta API field lists, this test quickly alerts you if the corresponding database columns need to be added. Since fields are only added (never removed), the test helps catch schema mismatches early.

**Test methods:**

- `test_account_insights_fields()` - Validates account-level insight fields
- `test_campaign_insights_fields()` - Validates campaign-level insight fields
- `test_adset_insights_fields()` - Validates ad set-level insight fields
- `test_campaign_insights_by_country_fields()` - Validates country breakdown fields
- `test_all_tables_exist()` - Ensures all required insight tables exist
- `test_schema_documentation()` - Prints the parsed schema for reference

**Output example:**

```
Table: account_insights
Columns (17): account_id, actions, clicks, cost_per_action_type, cpc, cpm, cpp, ctr, ...

Table: campaign_insights
Columns (15): account_id, actions, campaign_id, clicks, cpc, cpm, ctr, ...
```

## Writing Tests

Use markers to categorize tests:

```python
@pytest.mark.unit
def test_something():
    pass

@pytest.mark.integration
async def test_database_connection():
    pass
```

Run only unit tests:

```bash
uv run pytest -m unit
```

Run everything except integration tests:

```bash
uv run pytest -m "not integration"
```

## Schema Validation Workflow

When you add new fields to a grab method:

1. **Add the fields to `scheduled_grabber.py`:**

   ```python
   fields = [
       ...
       AdsInsights.Field.new_field,  # New field added
   ]
   ```

2. **Run the tests to see what's missing:**

   ```bash
   uv run pytest tests/test_field_schema_validation.py -v -s
   ```

3. **The test output will show:**

   ```
   adset_insights table missing columns: {'new_field'}
   Available: [account_id, actions, adset_id, ...]
   ```

4. **Update `db_schema.sql` with the new column:**

   ```sql
   ALTER TABLE adset_insights ADD COLUMN IF NOT EXISTS new_field TYPE;
   ```

5. **Run the tests again to verify:**

   ```bash
   uv run pytest tests/test_field_schema_validation.py -v
   ```
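
## How the Schema Check Works

The validation described above reduces to set subtraction: parse the column names out of each `CREATE TABLE` statement in `db_schema.sql`, then check that every requested field is a member of that set. The sketch below illustrates the idea; the helper name, the regex, and the sample schema are illustrative assumptions, not the actual test code.

```python
import re

def parse_table_columns(schema_sql: str) -> dict[str, set[str]]:
    """Extract column names per table from CREATE TABLE statements.

    Assumes one column definition per line inside the parentheses;
    table-level constraint lines (PRIMARY KEY, FOREIGN KEY, ...) are skipped.
    """
    pattern = re.compile(
        r"CREATE TABLE (?:IF NOT EXISTS )?(\w+)\s*\((.*?)\);",
        re.IGNORECASE | re.DOTALL,
    )
    tables: dict[str, set[str]] = {}
    for name, body in pattern.findall(schema_sql):
        columns = set()
        for line in body.splitlines():
            line = line.strip().rstrip(",")
            if not line or line.upper().startswith(
                ("PRIMARY KEY", "FOREIGN KEY", "UNIQUE", "CONSTRAINT", "CHECK")
            ):
                continue
            columns.add(line.split()[0])  # first token is the column name
        tables[name] = columns
    return tables

# Hypothetical schema fragment to demonstrate the check
schema = """
CREATE TABLE account_insights (
    account_id TEXT,
    clicks BIGINT,
    ctr NUMERIC,
    PRIMARY KEY (account_id)
);
"""
requested = {"account_id", "clicks", "spend"}
missing = requested - parse_table_columns(schema)["account_insights"]
print(missing)  # {'spend'} — the column that needs an ALTER TABLE
```

A set difference gives an immediate, readable failure message (the missing column names), which is exactly what step 3 of the workflow above relies on.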