# Compare commits

**21 commits** · `805c90f76e...main`

| SHA1 |
|---|
| a1d9a19d04 |
| c5fa92c4ec |
| a92c5b699f |
| 5f83ecd7ee |
| 511f381ff2 |
| 03ae7ea61a |
| 987553ef74 |
| 8f4753ff20 |
| 3ae4ce0d83 |
| e577945e75 |
| 68223f664a |
| 750ff0d4ff |
| 630f541b4f |
| 9ff1ee31d0 |
| f962a1a83d |
| df6da80320 |
| 6127ea4650 |
| d38bb5d4a8 |
| 638b466daf |
| d9260dff35 |
| 845a130aad |
```diff
@@ -1,2 +1,4 @@
 Uv managed python project that grabs data from the meta api and saves it in a timescaledb database.
 
+Always use uv run to execute python related stuff
+
```
**RATE_LIMITER_ENHANCEMENTS.md** (new file, 210 lines)

# Meta API Rate Limiter Enhancements

## Summary

Enhanced the rate limiter in [rate_limiter.py](src/meta_api_grabber/rate_limiter.py) to monitor **all** Meta API rate limit headers as documented in the [official Meta documentation](https://developers.facebook.com/docs/graph-api/overview/rate-limiting).

## New Headers Monitored

### 1. **X-App-Usage** (Platform Rate Limits)

Tracks application-level rate limits across all users.

**Fields:**

- `call_count`: Percentage of calls made (0-100)
- `total_cputime`: Percentage of CPU time used (0-100)
- `total_time`: Percentage of total time used (0-100)

**Example:**

```json
{
  "call_count": 28,
  "total_time": 25,
  "total_cputime": 25
}
```

### 2. **X-Ad-Account-Usage** (Ad Account Specific)

Tracks rate limits for specific ad accounts. **Stored per account ID** to support multiple accounts.

**Fields:**

- `acc_id_util_pct`: Percentage of ad account usage (0-100)
- `reset_time_duration`: Time in seconds until the rate limit resets
- `ads_api_access_tier`: Access tier (e.g., "standard_access", "development_access")

**Example:**

```json
{
  "acc_id_util_pct": 9.67,
  "reset_time_duration": 100,
  "ads_api_access_tier": "standard_access"
}
```

**Note:** Metrics are tracked separately for each ad account in a dictionary keyed by account ID (e.g., `act_123456789`).
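For illustration, the per-account store might hold entries shaped like this; the account ID and values are placeholders taken from the example above, not live data:

```python
# Hypothetical contents of the limiter's per-account usage dict.
# Keys are ad account IDs; values are the parsed X-Ad-Account-Usage fields.
ad_account_usage = {
    "act_123456789": {
        "acc_id_util_pct": 9.67,           # percent of the account's quota used
        "reset_time_duration": 100,        # seconds until the limit resets
        "ads_api_access_tier": "standard_access",
    },
}
```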
### 3. **X-Business-Use-Case-Usage** (Business Use Case Limits)

Tracks rate limits per business use case (ads_insights, ads_management, etc.).

**Fields:**

- `business_id`: Business object ID
- `type`: Type of BUC (ads_insights, ads_management, custom_audience, etc.)
- `call_count`: Percentage of calls made (0-100)
- `total_cputime`: Percentage of CPU time (0-100)
- `total_time`: Percentage of total time (0-100)
- `estimated_time_to_regain_access`: Time in minutes until access is restored
- `ads_api_access_tier`: Access tier

**Example:**

```json
{
  "66782684": [{
    "type": "ads_management",
    "call_count": 95,
    "total_cputime": 20,
    "total_time": 20,
    "estimated_time_to_regain_access": 0,
    "ads_api_access_tier": "development_access"
  }]
}
```

### 4. **x-fb-ads-insights-throttle** (Legacy)

The original header, still supported for backward compatibility.

**Fields:**

- `app_id_util_pct`: App usage percentage
- `acc_id_util_pct`: Account usage percentage

## Key Enhancements

### 1. Intelligent Throttling

The rate limiter now uses `estimated_time_to_regain_access` and `reset_time_duration` to calculate optimal delays:

```python
# If we have estimated_time_to_regain_access from BUC header
if self.estimated_time_to_regain_access > 0:
    delay = self.estimated_time_to_regain_access * 60  # Convert minutes to seconds

# If we have reset_time_duration from Ad Account header
elif self.reset_time_duration > 0:
    delay = self.reset_time_duration * 0.5  # Use fraction as safety margin
```

### 2. Comprehensive Error Code Detection

Expanded error code detection to include all Meta rate limit error codes (a minimal sketch of the check follows this list):

- **4**: App rate limit
- **17**: User rate limit
- **32**: Pages rate limit
- **613**: Custom rate limit
- **80000-80014**: Business Use Case rate limits (Ads Insights, Ads Management, Custom Audience, Instagram, LeadGen, Messenger, Pages, WhatsApp, Catalog)
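A minimal sketch of how such a message-based check can be written; the helper name is illustrative, and the real implementation in `rate_limiter.py` inlines the same substring tests rather than using a helper:

```python
# Hedged sketch: substring-based rate limit detection over an exception
# message, mirroring the approach used in rate_limiter.py.
RATE_LIMIT_MARKERS = (
    "rate limit",
    "too many requests",
    "throttle",
    "error code 4",    # app rate limit
    "error code 17",   # user rate limit
    "error code 32",   # pages rate limit
    "error code 613",  # custom rate limit
    # Business Use Case codes (insights, management, audiences, etc.)
    *(f"error code {code}" for code in (80000, 80001, 80002, 80003,
                                        80004, 80005, 80006, 80008,
                                        80009, 80014)),
)

def is_rate_limit_error(exc: Exception) -> bool:
    """Return True if the exception message looks like a Meta rate limit error."""
    message = str(exc).lower()
    return any(marker in message for marker in RATE_LIMIT_MARKERS)
```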
### 3. Debug Logging

All headers are now logged in DEBUG mode with detailed parsing information:

```python
logger.debug(f"X-App-Usage header: {header_value}")
logger.debug(f"Parsed X-App-Usage: {result}")
```

### 4. Enhanced Statistics

The `get_stats()` and `print_stats()` methods now display comprehensive metrics from all headers:

```
======================================================================
RATE LIMITER STATISTICS
======================================================================
Total Requests: 0
Throttled Requests: 0
Rate Limit Errors: 0

X-App-Usage (Platform Rate Limits):
  Call Count: 95.0%
  Total CPU Time: 90.0%
  Total Time: 88.0%

X-Ad-Account-Usage:
  Account Usage: 97.5%
  Reset Time Duration: 300s
  API Access Tier: standard_access

X-Business-Use-Case-Usage:
  Type: ads_insights
  Call Count: 98.0%
  Total CPU Time: 95.0%
  Total Time: 92.0%
  Est. Time to Regain: 15 min

Legacy (x-fb-ads-insights-throttle):
  App Usage: 93.0%
  Account Usage: 96.0%

Max Usage Across All Metrics: 98.0%
Currently Throttled: True
======================================================================
```

## Usage

### Enable Debug Logging

To see all header parsing in debug mode:

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
```

### Updating Usage with Account ID

When calling `update_usage()`, you can optionally provide an account ID to track per-account metrics:

```python
# Option 1: Provide account_id explicitly
limiter.update_usage(response, account_id='act_123456789')

# Option 2: Let the limiter try to extract it from the response
limiter.update_usage(response)  # Will attempt to extract account_id
```

### Access New Metrics

All metrics are available through the `get_stats()` method:

```python
stats = limiter.get_stats()

print(f"App call count: {stats['app_call_count']}%")
print(f"Regain access in: {stats['estimated_time_to_regain_access']} min")

# Per-account metrics
for account_id, usage in stats['ad_account_usage'].items():
    print(f"Account {account_id}:")
    print(f"  Usage: {usage['acc_id_util_pct']}%")
    print(f"  Reset in: {usage['reset_time_duration']}s")
    print(f"  API tier: {usage['ads_api_access_tier']}")

# Business use case details
for buc in stats['buc_usage']:
    print(f"BUC {buc['type']}: {buc['call_count']}%")
```

## Testing

Run the test script to see the rate limiter in action:

```bash
uv run python test_rate_limiter.py
```

This will demonstrate:

- Parsing all four header types
- Intelligent throttling based on usage
- Comprehensive statistics display
- Debug logging output

## References

- [Meta Graph API Rate Limiting](https://developers.facebook.com/docs/graph-api/overview/rate-limiting)
- [Meta Marketing API Best Practices](https://developers.facebook.com/docs/marketing-api/insights/best-practices/)
````diff
@@ -2,6 +2,10 @@
 
 Async data collection system for Meta's Marketing API with TimescaleDB time-series storage and dashboard support.
 
+```
+docker build . -t gitea.99tales.net/jonas/meta_grabber:lastest
+```
+
 ## Features
 
 - **OAuth2 Authentication** - Automated token generation flow
````
**analyticsdashboard-476613-eb23490ceed8.json** (new file, 13 lines)

```json
{
  "type": "service_account",
  "project_id": "analyticsdashboard-476613",
  "private_key_id": "eb23490ceed829c7b0e14bdaac3c5accf8d008c9",
  "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDAOdmUXs2KDY/r\ndRhbFjRdyZTTGnMVscxMvWf5N/quf0f+bOBrN5jz9fJKqs+kxS5556xgZ2r+2FGQ\nCeKCPk8I2kocmqU12rb5ncopef3+wMRTuHJNF0wit/rsPyfar7T5rC4isB/xP3Ss\nZcZ3jA/IfFnmfH4jdHu4okoKA7SqxvjqPpOjTmKhwV40YdG+mCjr7LJC7lxahfh9\n7w1YCC07w253P78goRgM5dFvcNY0THAsq4a4blnKQQ6lKUkxZ4DvIDIOyml4YZ23\nqevUSEa8KHiAkTOD6PzLP8rrJi9ovQpIGldr6HtOF+whGuh667Zezn2lRGOLj9zO\nNRMitaQxAgMBAAECggEABHNSbkpOR/FNxmMiT+Q7t3rV+e1ALN2+Sn3izQpWwM4s\nRIpmHRVfHTGHN+NXMLa//2KLGHr3JzSq7sLL0714PE7m2F1cOzBNJt+tsTgkgU7F\nPOBQWn3nmAYvxmMlRmg7BcIkGfl/Q9P28jwzqXY8sfpEZT8MoerYfSCExlE/pZSG\nIaqdSJXssWMu1ZoC4GWj76lzWafC+wjXwAgHlqpi3DnThAg0BYMVOZkPe4Xvmb85\n8is0hKQuc49UXtJ8V9a6zlAM+cKRzNrnRpEMdYagRSDPwwF+J+qh6UqoLnGND7UL\nWuWuss5lJZwIWjIZrVagqJhzBLvBbS6C0UHW7LeGgQKBgQDnCJ3HbYiqIWugDcSW\nMdPGx5Xk9mBvyjPDotBO/C5e8FBzxSp6Qegy9X6RYjIfuKCnVfCCsDoQJzy94RvW\nRn1q0WD1h2ov6LOXSNncOCa3k9jpRMlcnqj3NCkG52R6DPjDf3I/3N49dYVp6UcX\nENrHXBMLJBH4sYLnBIC5TXdE5QKBgQDU/6V5VhMdZc+hrIua+Y8DNznzBH5LeS4x\njx9B9AmwYoD4W4W87p7JwcJma7ZP9OUzO4qlk48ZFwTG9GLZGvyJV8DuY3WrfHCX\nDnoRMzsgS6vdxiRTSIwy0bTi33iuZcJo/KvfPL8dOgEUvQmanqeKz7SJ7O7KhmCk\n18QRMNZZXQKBgGcU4BkQFS8blEKogfMlrkD94jJzf1nBlVEPvvPO7v2rKapN6YL9\nDxZVlLBXaNfgb8XZwWL+MBnu99ocq2fysZjMbP9/+PABWsgAWDw6zYORMvH5oAJ0\nRB1wJ3IOIjWWvhO0NIysBnjTi8BStkZjXcofmduZr28P/MEIsEp9dt7FAoGAbruF\nNGJqR5NBcWS5o1TwY5SXfN6uJeCXAk7MykXrr5ZWREeYbJOFW5Bu1z5SJplDevIO\nb2waLcoIwsIUjZf5CBHmDEkKyJ9GDVIKZdzDdVPBwucaxW1m7ZiWOIhDPi9K9be+\nRq1XEgOwwi5QyuCGa6T1z+qsbf+USL6fgOxp00UCgYBJEJ1t6pyG10KfvmARMg01\n1r+D7EjwPfqcG8svtFX756EMqbYNm7YYYQJ1lG4CgHHI5KVb92DQ8kpxFvCARkra\nJMfKG4PzqkXK1Oqj4+RP6cGw1i4Z6wBNJtkvVRlONz03QVfxRL4UWNGjjMMw0jvh\nTE/wKaiR3JZtP3I0CHtIOg==\n-----END PRIVATE KEY-----\n",
  "client_email": "googleanalyticsdienstkonto@analyticsdashboard-476613.iam.gserviceaccount.com",
  "client_id": "109465128544817306545",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/googleanalyticsdienstkonto%40analyticsdashboard-476613.iam.gserviceaccount.com",
  "universe_domain": "googleapis.com"
}
```
**google-ads.yaml** (new file, 31 lines)

```yaml
# Google Ads API Configuration
# For more information, visit: https://developers.google.com/google-ads/api/docs/client-libs/python/configuration

# Required: Your Google Ads developer token
developer_token: WW_IIk0Al4U-O-XtasOgew

# Use service account authentication with JSON key file
json_key_file_path: analyticsdashboard-476613-eb23490ceed8.json

# Required: Your Google Ads login customer ID (without dashes)
# This is typically your manager account ID
login_customer_id: 6226630160

use_proto_plus: False

# Optional: Logging configuration
logging:
  version: 1
  disable_existing_loggers: False
  formatters:
    default_fmt:
      format: '[%(asctime)s - %(levelname)s] %(message).5000s'
      datefmt: '%Y-%m-%d %H:%M:%S'
  handlers:
    default_handler:
      class: logging.StreamHandler
      formatter: default_fmt
  loggers:
    '':
      handlers: [default_handler]
      level: INFO
```
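With this file in place, the Google Ads client library can presumably be initialized from it. A minimal sketch: `GoogleAdsClient.load_from_storage` is the library's documented loader, and the relative path assumes the working directory is the repo root:

```python
from google.ads.googleads.client import GoogleAdsClient

# Load credentials and settings from the repo's google-ads.yaml.
# use_proto_plus: False in the YAML means responses are raw protobuf messages.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
```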
**google_ads_key.json** (new file, empty)
**pyproject.toml**

```diff
@@ -9,17 +9,26 @@ dependencies = [
     "alembic>=1.17.0",
     "asyncpg>=0.30.0",
     "facebook-business>=23.0.3",
+    "google-ads>=28.3.0",
     "python-dotenv>=1.1.1",
     "requests-oauthlib>=2.0.0",
     "sqlalchemy[asyncio]>=2.0.44",
 ]
 
+[project.optional-dependencies]
+test = [
+    "pytest>=8.0.0",
+    "pytest-asyncio>=0.25.0",
+]
+
 [project.scripts]
 meta-auth = "meta_api_grabber.auth:main"
 meta-scheduled = "meta_api_grabber.scheduled_grabber:main"
 meta-insights = "meta_api_grabber.insights_grabber:main"
 meta-test-accounts = "meta_api_grabber.test_ad_accounts:main"
+meta-test-leads = "meta_api_grabber.test_page_leads:main"
 meta-token = "meta_api_grabber.token_manager:main"
+google-ads-test = "meta_api_grabber.test_google_ads_accounts:main"
 
 [build-system]
 requires = ["hatchling"]
```
**src/meta_api_grabber/database.py**

```diff
@@ -4,6 +4,7 @@ Handles time-series data with metadata caching.
 """
 
 import asyncio
+import logging
 import os
 from datetime import datetime, date
 from typing import Any, Dict, List, Optional
```

```diff
@@ -11,6 +12,9 @@ from typing import Any, Dict, List, Optional
 import asyncpg
 from dotenv import load_dotenv
 
+# Set up logger
+logger = logging.getLogger(__name__)
+
 
 class TimescaleDBClient:
     """Async client for TimescaleDB operations with metadata caching."""
```

```diff
@@ -171,18 +175,27 @@ class TimescaleDBClient:
             status: Campaign status
             objective: Campaign objective
         """
         query = """
             INSERT INTO campaigns (campaign_id, account_id, campaign_name, status, objective, updated_at)
             VALUES ($1, $2, $3, $4, $5, NOW())
             ON CONFLICT (campaign_id)
             DO UPDATE SET
-                campaign_name = EXCLUDED.campaign_name,
+                campaign_name = CASE
+                    WHEN EXCLUDED.campaign_name = 'Unknown' THEN campaigns.campaign_name
+                    ELSE EXCLUDED.campaign_name
+                END,
                 status = COALESCE(EXCLUDED.status, campaigns.status),
                 objective = COALESCE(EXCLUDED.objective, campaigns.objective),
                 updated_at = NOW()
         """
 
+        logger.debug(f"Executing query with params: {[campaign_id, account_id, campaign_name, status, objective]}")
+
         async with self.pool.acquire() as conn:
-            await conn.execute(query, campaign_id, account_id, campaign_name, status, objective)
+            result = await conn.execute(query, campaign_id, account_id, campaign_name, status, objective)
+            logger.debug(f"Query result: {result}")
 
     async def upsert_adset(
         self,
```

```diff
@@ -297,7 +310,6 @@ class TimescaleDBClient:
         account_id: str,
         data: Dict[str, Any],
         date_preset: str = "today",
-        cache_metadata: bool = True,
     ):
         """
         Insert campaign-level insights data.
```

```diff
@@ -308,10 +320,9 @@ class TimescaleDBClient:
             account_id: Ad account ID
             data: Insights data dictionary from Meta API
             date_preset: Date preset used
-            cache_metadata: If True, automatically cache campaign metadata from insights data
         """
-        # Cache campaign metadata if requested and available in the insights data
-        if cache_metadata and data.get("campaign_name"):
+        # Auto-cache campaign metadata if available in the insights data
+        if data.get("campaign_name"):
             await self.upsert_campaign(
                 campaign_id=campaign_id,
                 account_id=account_id,
```

```diff
@@ -376,7 +387,6 @@ class TimescaleDBClient:
         account_id: str,
         data: Dict[str, Any],
         date_preset: str = "today",
-        cache_metadata: bool = True,
     ):
         """
         Insert ad set level insights data.
```

```diff
@@ -388,21 +398,11 @@ class TimescaleDBClient:
             account_id: Ad account ID
             data: Insights data dictionary from Meta API
             date_preset: Date preset used
-            cache_metadata: If True, automatically cache adset/campaign metadata from insights data
         """
-        # Cache metadata if requested and available in the insights data
-        if cache_metadata:
-            # First ensure campaign exists (adset references campaign)
-            # We don't have campaign name in adset insights, so only create if needed
-            await self.upsert_campaign(
-                campaign_id=campaign_id,
-                account_id=account_id,
-                campaign_name='Unknown',  # Campaign name not in adset insights
-                status=None,
-                objective=None,
-            )
-
-        # Then cache adset metadata if available
+        # Auto-cache adset metadata if available in the insights data
+        # Note: Campaign should already exist from cache_campaigns_metadata or grab_campaign_insights
+        # If it doesn't exist, the foreign key constraint will fail with a clear error
+        # This is intentional - we should never silently create campaigns with 'Unknown' names
         if data.get("adset_name"):
             await self.upsert_adset(
                 adset_id=adset_id,
```
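Since the silent fallback was removed, callers are now expected to cache campaign metadata before inserting ad set insights. A hedged sketch of the intended call order: `upsert_campaign` matches the diff, the `insert_adset_insights` parameters are assumed by analogy with the campaign-level method, and all IDs and values are placeholders:

```python
from datetime import datetime, timezone

async def store_adset_metrics(db) -> None:
    # Ensure the campaign row exists first; the foreign key on adset
    # insights now fails loudly instead of silently creating an
    # 'Unknown' campaign.
    await db.upsert_campaign(
        campaign_id="120210000000000001",  # placeholder IDs throughout
        account_id="act_123456789",
        campaign_name="Spring Sale",
        status="ACTIVE",
        objective="OUTCOME_SALES",
    )
    await db.insert_adset_insights(
        time=datetime.now(timezone.utc),
        adset_id="120210000000000002",
        campaign_id="120210000000000001",
        account_id="act_123456789",
        data={"adset_name": "Lookalike DE", "impressions": "1000"},
    )
```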
```diff
@@ -459,6 +459,74 @@ class TimescaleDBClient:
             ctr, cpc, cpm, actions, date_preset, date_start, date_stop
         )
 
+    async def insert_campaign_insights_by_country(
+        self,
+        time: datetime,
+        campaign_id: str,
+        account_id: str,
+        country: str,
+        data: Dict[str, Any],
+        date_preset: str = "today",
+    ):
+        """
+        Insert campaign-level insights data broken down by country.
+
+        Args:
+            time: Timestamp for the data point
+            campaign_id: Campaign ID
+            account_id: Ad account ID
+            country: ISO 2-letter country code
+            data: Insights data dictionary from Meta API
+            date_preset: Date preset used
+        """
+        query = """
+            INSERT INTO campaign_insights_by_country (
+                time, campaign_id, account_id, country, impressions, clicks, spend, reach,
+                ctr, cpc, cpm, actions, date_preset, date_start, date_stop, fetched_at
+            ) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, NOW())
+            ON CONFLICT (time, campaign_id, country)
+            DO UPDATE SET
+                impressions = EXCLUDED.impressions,
+                clicks = EXCLUDED.clicks,
+                spend = EXCLUDED.spend,
+                reach = EXCLUDED.reach,
+                ctr = EXCLUDED.ctr,
+                cpc = EXCLUDED.cpc,
+                cpm = EXCLUDED.cpm,
+                actions = EXCLUDED.actions,
+                date_preset = EXCLUDED.date_preset,
+                date_start = EXCLUDED.date_start,
+                date_stop = EXCLUDED.date_stop,
+                fetched_at = NOW()
+        """
+
+        impressions = int(data.get("impressions", 0)) if data.get("impressions") else None
+        clicks = int(data.get("clicks", 0)) if data.get("clicks") else None
+        spend = float(data.get("spend", 0)) if data.get("spend") else None
+        reach = int(data.get("reach", 0)) if data.get("reach") else None
+        ctr = float(data.get("ctr", 0)) if data.get("ctr") else None
+        cpc = float(data.get("cpc", 0)) if data.get("cpc") else None
+        cpm = float(data.get("cpm", 0)) if data.get("cpm") else None
+
+        # Extract date range from Meta API response and convert to date objects
+        from datetime import date as Date
+        date_start = None
+        date_stop = None
+        if data.get("date_start"):
+            date_start = Date.fromisoformat(data["date_start"])
+        if data.get("date_stop"):
+            date_stop = Date.fromisoformat(data["date_stop"])
+
+        import json
+        actions = json.dumps(data.get("actions", [])) if data.get("actions") else None
+
+        async with self.pool.acquire() as conn:
+            await conn.execute(
+                query,
+                time, campaign_id, account_id, country, impressions, clicks, spend, reach,
+                ctr, cpc, cpm, actions, date_preset, date_start, date_stop
+            )
+
     # ========================================================================
     # QUERY HELPERS
     # ========================================================================
```
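A hedged usage sketch for the new method; the signature is taken from the diff above, while the IDs and metric values are placeholders:

```python
from datetime import datetime, timezone

async def store_country_row(db) -> None:
    # One upserted row per (time, campaign, country) tuple.
    await db.insert_campaign_insights_by_country(
        time=datetime.now(timezone.utc),
        campaign_id="120210000000000001",  # placeholder
        account_id="act_123456789",        # placeholder
        country="DE",
        data={
            "impressions": "5000",
            "clicks": "120",
            "spend": "48.20",
            "date_start": "2025-01-01",
            "date_stop": "2025-01-01",
        },
        date_preset="today",
    )
```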
```diff
@@ -14,10 +14,17 @@ ALTER TABLE IF EXISTS account_insights ADD COLUMN IF NOT EXISTS date_stop DATE;
 
 ALTER TABLE IF EXISTS campaign_insights ADD COLUMN IF NOT EXISTS date_start DATE;
 ALTER TABLE IF EXISTS campaign_insights ADD COLUMN IF NOT EXISTS date_stop DATE;
+ALTER TABLE IF EXISTS campaign_insights ADD COLUMN IF NOT EXISTS frequency NUMERIC(10, 4);
+ALTER TABLE IF EXISTS campaign_insights ADD COLUMN IF NOT EXISTS cpp NUMERIC(10, 4);
+ALTER TABLE IF EXISTS campaign_insights ADD COLUMN IF NOT EXISTS cost_per_action_type JSONB;
 
 ALTER TABLE IF EXISTS adset_insights ADD COLUMN IF NOT EXISTS date_start DATE;
 ALTER TABLE IF EXISTS adset_insights ADD COLUMN IF NOT EXISTS date_stop DATE;
 
+ALTER TABLE IF EXISTS campaign_insights_by_country ADD COLUMN IF NOT EXISTS frequency NUMERIC(10, 4);
+ALTER TABLE IF EXISTS campaign_insights_by_country ADD COLUMN IF NOT EXISTS cpp NUMERIC(10, 4);
+ALTER TABLE IF EXISTS campaign_insights_by_country ADD COLUMN IF NOT EXISTS cost_per_action_type JSONB;
+
 -- ============================================================================
 -- METADATA TABLES (Regular PostgreSQL tables for caching)
 -- ============================================================================
```

```diff
@@ -115,14 +122,17 @@ CREATE TABLE IF NOT EXISTS campaign_insights (
     clicks BIGINT,
     spend NUMERIC(12, 2),
     reach BIGINT,
+    frequency NUMERIC(10, 4),
 
     -- Calculated metrics
     ctr NUMERIC(10, 6),
     cpc NUMERIC(10, 4),
     cpm NUMERIC(10, 4),
+    cpp NUMERIC(10, 4),  -- Cost per reach
 
     -- Actions
     actions JSONB,
+    cost_per_action_type JSONB,
 
     -- Metadata
     date_preset VARCHAR(50),
```

```diff
@@ -163,6 +173,7 @@ CREATE TABLE IF NOT EXISTS adset_insights (
     cpc NUMERIC(10, 4),
     cpm NUMERIC(10, 4),
 
+
     -- Actions
     actions JSONB,
 
```

```diff
@@ -189,6 +200,57 @@ CREATE INDEX IF NOT EXISTS idx_adset_insights_account_time
     ON adset_insights (account_id, time DESC);
 
+
+-- Campaign-level insights by country (time-series data)
+CREATE TABLE IF NOT EXISTS campaign_insights_by_country (
+    time TIMESTAMPTZ NOT NULL,
+    campaign_id VARCHAR(50) NOT NULL REFERENCES campaigns(campaign_id),
+    account_id VARCHAR(50) NOT NULL REFERENCES ad_accounts(account_id),
+    country VARCHAR(2) NOT NULL,  -- ISO 2-letter country code
+
+    -- Core metrics
+    impressions BIGINT,
+    clicks BIGINT,
+    spend NUMERIC(12, 2),
+    reach BIGINT,
+    frequency NUMERIC(10, 4),
+
+    -- Calculated metrics
+    ctr NUMERIC(10, 6),
+    cpc NUMERIC(10, 4),
+    cpm NUMERIC(10, 4),
+    cpp NUMERIC(10, 4),  -- Cost per reach
+
+    -- Actions
+    actions JSONB,
+    cost_per_action_type JSONB,
+
+    -- Metadata
+    date_preset VARCHAR(50),
+    date_start DATE,  -- Actual start date of the data range from Meta API
+    date_stop DATE,   -- Actual end date of the data range from Meta API
+    fetched_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+
+    PRIMARY KEY (time, campaign_id, country)
+);
+
+-- Convert to hypertable
+SELECT create_hypertable('campaign_insights_by_country', 'time',
+    if_not_exists => TRUE,
+    chunk_time_interval => INTERVAL '1 day'
+);
+
+CREATE INDEX IF NOT EXISTS idx_campaign_insights_by_country_campaign_time
+    ON campaign_insights_by_country (campaign_id, time DESC);
+CREATE INDEX IF NOT EXISTS idx_campaign_insights_by_country_account_time
+    ON campaign_insights_by_country (account_id, time DESC);
+CREATE INDEX IF NOT EXISTS idx_campaign_insights_by_country_country
+    ON campaign_insights_by_country (country, time DESC);
+
+
+-- Compression policy for campaign_insights_by_country
+SELECT add_compression_policy('campaign_insights_by_country', INTERVAL '7 days', if_not_exists => TRUE);
+
+
 -- ============================================================================
 -- CONTINUOUS AGGREGATES (Pre-computed rollups for dashboards)
 -- ============================================================================
```
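Once rows land in this hypertable, a dashboard can aggregate per country. A minimal asyncpg sketch; the DSN is a placeholder, with real credentials coming from the project's environment:

```python
import asyncio
import asyncpg

async def spend_by_country() -> None:
    # Placeholder DSN; the real connection settings come from .env.
    conn = await asyncpg.connect("postgresql://user:pass@localhost/meta")
    rows = await conn.fetch(
        """
        SELECT country, SUM(spend) AS total_spend
        FROM campaign_insights_by_country
        WHERE time > NOW() - INTERVAL '7 days'
        GROUP BY country
        ORDER BY total_spend DESC
        """
    )
    for row in rows:
        print(row["country"], row["total_spend"])
    await conn.close()

asyncio.run(spend_by_country())
```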
**src/meta_api_grabber/rate_limiter.py**

```diff
@@ -3,23 +3,35 @@ Rate limiting and backoff mechanism for Meta Marketing API.
 
 Based on Meta's best practices:
 https://developers.facebook.com/docs/marketing-api/insights/best-practices/
+https://developers.facebook.com/docs/graph-api/overview/rate-limiting
 """
 
 import asyncio
+import json
+import logging
 import time
-from typing import Any, Callable, Dict, Optional
+from typing import Any, Callable, Dict, List, Optional
 
 from facebook_business.api import FacebookAdsApi
 
+logger = logging.getLogger(__name__)
+logger.setLevel(logging.DEBUG)
+
 
 class MetaRateLimiter:
     """
     Rate limiter with exponential backoff for Meta Marketing API.
 
     Features:
-    - Monitors x-fb-ads-insights-throttle header
+    - Monitors X-App-Usage header (platform rate limits)
+    - Monitors X-Ad-Account-Usage header (ad account specific)
+    - Monitors X-Business-Use-Case-Usage header (business use case limits)
+    - Monitors x-fb-ads-insights-throttle header (legacy)
     - Automatic throttling when usage > 75%
     - Exponential backoff on rate limit errors
+    - Uses reset_time_duration and estimated_time_to_regain_access
     - Configurable thresholds
     """
```
```diff
@@ -44,9 +56,24 @@ class MetaRateLimiter:
         self.max_retry_delay = max_retry_delay
         self.max_retries = max_retries
 
-        # Track current usage percentages
-        self.app_usage_pct: float = 0.0
-        self.account_usage_pct: float = 0.0
+        # Track current usage percentages from different headers
+        # X-App-Usage (platform rate limits)
+        self.app_call_count: float = 0.0
+        self.app_total_cputime: float = 0.0
+        self.app_total_time: float = 0.0
+
+        # X-Ad-Account-Usage (ad account specific) - tracked per account
+        # Key: account_id (e.g., "act_123456789"), Value: dict with metrics
+        self.ad_account_usage: Dict[str, Dict[str, Any]] = {}
+
+        # X-Business-Use-Case-Usage (business use case limits)
+        self.buc_usage: List[Dict[str, Any]] = []
+        self.estimated_time_to_regain_access: int = 0  # minutes
+
+        # Legacy x-fb-ads-insights-throttle
+        self.legacy_app_usage_pct: float = 0.0
+        self.legacy_account_usage_pct: float = 0.0
+
         self.last_check_time: float = time.time()
 
         # Stats
```
```diff
@@ -54,9 +81,169 @@ class MetaRateLimiter:
         self.throttled_requests: int = 0
         self.rate_limit_errors: int = 0
 
+    def _get_headers(self, response: Any) -> Optional[Dict[str, str]]:
+        """
+        Extract headers from various response object types.
+
+        Args:
+            response: API response object
+
+        Returns:
+            Dictionary of headers or None
+        """
+        # Facebook SDK response object
+        if hasattr(response, '_headers'):
+            return response._headers
+        elif hasattr(response, 'headers'):
+            return response.headers
+        elif hasattr(response, '_api_response'):
+            return getattr(response._api_response, 'headers', None)
+        return None
+
+    def parse_x_app_usage(self, response: Any) -> Dict[str, float]:
+        """
+        Parse X-App-Usage header (Platform rate limits).
+
+        Header format: {"call_count": 28, "total_time": 25, "total_cputime": 25}
+
+        Args:
+            response: API response object
+
+        Returns:
+            Dictionary with call_count, total_time, total_cputime
+        """
+        try:
+            headers = self._get_headers(response)
+            if headers:
+                header_value = headers.get('x-app-usage') or headers.get('X-App-Usage', '')
+                if header_value:
+                    logger.debug(f"X-App-Usage header: {header_value}")
+                    data = json.loads(header_value)
+                    result = {
+                        'call_count': float(data.get('call_count', 0)),
+                        'total_time': float(data.get('total_time', 0)),
+                        'total_cputime': float(data.get('total_cputime', 0)),
+                    }
+                    logger.debug(f"Parsed X-App-Usage: {result}")
+                    return result
+        except Exception as e:
+            logger.debug(f"Failed to parse X-App-Usage header: {e}")
+        return {'call_count': 0.0, 'total_time': 0.0, 'total_cputime': 0.0}
+
+    def parse_x_ad_account_usage(self, response: Any) -> Optional[Dict[str, Any]]:
+        """
+        Parse X-Ad-Account-Usage header (Ad account specific limits).
+
+        Header format: {
+            "acc_id_util_pct": 9.67,
+            "reset_time_duration": 100,
+            "ads_api_access_tier": "standard_access"
+        }
+
+        Args:
+            response: API response object
+
+        Returns:
+            Dictionary with metrics, or None if header not present.
+            To determine account_id, check response object or URL.
+        """
+        try:
+            headers = self._get_headers(response)
+            if headers:
+                header_value = headers.get('x-ad-account-usage') or headers.get('X-Ad-Account-Usage', '')
+                if header_value:
+                    logger.debug(f"X-Ad-Account-Usage header: {header_value}")
+                    data = json.loads(header_value)
+                    result = {
+                        'acc_id_util_pct': float(data.get('acc_id_util_pct', 0)),
+                        'reset_time_duration': int(data.get('reset_time_duration', 0)),
+                        'ads_api_access_tier': data.get('ads_api_access_tier'),
+                    }
+                    logger.debug(f"Parsed X-Ad-Account-Usage: {result}")
+                    return result
+        except Exception as e:
+            logger.debug(f"Failed to parse X-Ad-Account-Usage header: {e}")
+        return None
+
+    def _extract_account_id(self, response: Any) -> Optional[str]:
+        """
+        Extract account ID from response object.
+
+        Tries multiple methods to find the account ID from the response.
+
+        Args:
+            response: API response object
+
+        Returns:
+            Account ID string (e.g., "act_123456789") or None
+        """
+        # Try to get account_id from response attributes
+        if hasattr(response, 'account_id'):
+            return response.account_id
+        if hasattr(response, '_data') and isinstance(response._data, dict):
+            return response._data.get('account_id')
+
+        # Try to get from parent object
+        if hasattr(response, '_parent_object'):
+            parent = response._parent_object
+            if hasattr(parent, 'get_id'):
+                return parent.get_id()
+            if hasattr(parent, '_data') and isinstance(parent._data, dict):
+                return parent._data.get('account_id') or parent._data.get('id')
+
+        # Try to get from API context
+        if hasattr(response, '_api_context'):
+            context = response._api_context
+            if hasattr(context, 'account_id'):
+                return context.account_id
+
+        return None
+
+    def parse_x_business_use_case_usage(self, response: Any) -> List[Dict[str, Any]]:
+        """
+        Parse X-Business-Use-Case-Usage header (Business use case limits).
+
+        Header format: {
+            "business-id": [{
+                "type": "ads_management",
+                "call_count": 95,
+                "total_cputime": 20,
+                "total_time": 20,
+                "estimated_time_to_regain_access": 0,
+                "ads_api_access_tier": "development_access"
+            }],
+            ...
+        }
+
+        Args:
+            response: API response object
+
+        Returns:
+            List of usage dictionaries for each business use case
+        """
+        try:
+            headers = self._get_headers(response)
+            if headers:
+                header_value = headers.get('x-business-use-case-usage') or headers.get('X-Business-Use-Case-Usage', '')
+                if header_value:
+                    logger.debug(f"X-Business-Use-Case-Usage header: {header_value}")
+                    data = json.loads(header_value)
+                    # Flatten the nested structure
+                    all_usage = []
+                    for business_id, use_cases in data.items():
+                        if isinstance(use_cases, list):
+                            for use_case in use_cases:
+                                use_case['business_id'] = business_id
+                                all_usage.append(use_case)
+                    logger.debug(f"Parsed X-Business-Use-Case-Usage: {len(all_usage)} use cases")
+                    return all_usage
+        except Exception as e:
+            logger.debug(f"Failed to parse X-Business-Use-Case-Usage header: {e}")
+        return []
+
     def parse_throttle_header(self, response: Any) -> Dict[str, float]:
         """
-        Parse x-fb-ads-insights-throttle header from response.
+        Parse x-fb-ads-insights-throttle header from response (legacy).
 
         Header format: {"app_id_util_pct": 25.5, "acc_id_util_pct": 10.0}
 
```
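A hedged sketch of exercising these parsers in isolation: `FakeResponse` is an invented stand-in exposing the `headers` attribute that `_get_headers()` checks, and the limiter is assumed to be constructible with its defaults:

```python
import json

class FakeResponse:
    """Stand-in with the `headers` attribute that _get_headers() looks for."""
    def __init__(self, headers):
        self.headers = headers

limiter = MetaRateLimiter()
resp = FakeResponse({
    "x-app-usage": json.dumps(
        {"call_count": 28, "total_time": 25, "total_cputime": 25}),
    "x-ad-account-usage": json.dumps(
        {"acc_id_util_pct": 9.67,
         "reset_time_duration": 100,
         "ads_api_access_tier": "standard_access"}),
})

print(limiter.parse_x_app_usage(resp))         # {'call_count': 28.0, ...}
print(limiter.parse_x_ad_account_usage(resp))  # {'acc_id_util_pct': 9.67, ...}
```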
```diff
@@ -67,49 +254,149 @@ class MetaRateLimiter:
             Dictionary with app_id_util_pct and acc_id_util_pct
         """
         try:
-            # Try to get the header from different response types
-            headers = None
-
-            # Facebook SDK response object
-            if hasattr(response, '_headers'):
-                headers = response._headers
-            elif hasattr(response, 'headers'):
-                headers = response.headers
-            elif hasattr(response, '_api_response'):
-                headers = getattr(response._api_response, 'headers', None)
-
+            headers = self._get_headers(response)
             if headers:
                 throttle_header = headers.get('x-fb-ads-insights-throttle', '')
                 if throttle_header:
-                    import json
+                    logger.debug(f"x-fb-ads-insights-throttle header: {throttle_header}")
                     throttle_data = json.loads(throttle_header)
-                    return {
+                    result = {
                         'app_id_util_pct': float(throttle_data.get('app_id_util_pct', 0)),
                         'acc_id_util_pct': float(throttle_data.get('acc_id_util_pct', 0)),
                     }
+                    logger.debug(f"Parsed x-fb-ads-insights-throttle: {result}")
+                    return result
         except Exception as e:
-            # Silently fail - we'll use conservative defaults
-            pass
+            logger.debug(f"Failed to parse x-fb-ads-insights-throttle header: {e}")
         return {'app_id_util_pct': 0.0, 'acc_id_util_pct': 0.0}
 
-    def update_usage(self, response: Any):
+    def update_usage(self, response: Any, account_id: Optional[str] = None):
         """
-        Update usage statistics from API response.
+        Update usage statistics from all API response headers.
+
+        Parses and updates metrics from:
+        - X-App-Usage
+        - X-Ad-Account-Usage (per account)
+        - X-Business-Use-Case-Usage
+        - x-fb-ads-insights-throttle (legacy)
 
         Args:
             response: API response object
+            account_id: Optional account ID (e.g., "act_123456789").
+                If not provided, will attempt to extract from response.
         """
-        throttle_info = self.parse_throttle_header(response)
-        self.app_usage_pct = throttle_info['app_id_util_pct']
-        self.account_usage_pct = throttle_info['acc_id_util_pct']
+        # Parse all headers
+        app_usage = self.parse_x_app_usage(response)
+        ad_account_usage = self.parse_x_ad_account_usage(response)
+        buc_usage = self.parse_x_business_use_case_usage(response)
+        legacy_throttle = self.parse_throttle_header(response)
+
+        # Update X-App-Usage metrics
+        self.app_call_count = app_usage['call_count']
+        self.app_total_cputime = app_usage['total_cputime']
+        self.app_total_time = app_usage['total_time']
+
+        # Update X-Ad-Account-Usage metrics (per account)
+        if ad_account_usage:
+            # Try to get account_id
+            if not account_id:
+                account_id = self._extract_account_id(response)
+
+            # Use 'unknown' as fallback if we can't determine account
+            if not account_id:
+                account_id = 'unknown'
+                logger.debug("Could not determine account_id, using 'unknown'")
+
+            # Store usage for this account
+            self.ad_account_usage[account_id] = ad_account_usage
+            logger.debug(f"Updated ad account usage for {account_id}")
+
+        # Update X-Business-Use-Case-Usage metrics
+        self.buc_usage = buc_usage
+        # Find the maximum estimated_time_to_regain_access across all use cases
+        if buc_usage:
+            self.estimated_time_to_regain_access = max(
+                (uc.get('estimated_time_to_regain_access', 0) for uc in buc_usage),
+                default=0
+            )
+
+        # Update legacy metrics
+        self.legacy_app_usage_pct = legacy_throttle['app_id_util_pct']
+        self.legacy_account_usage_pct = legacy_throttle['acc_id_util_pct']
 
         self.last_check_time = time.time()
 
-        # Log if we're approaching limits
-        max_usage = max(self.app_usage_pct, self.account_usage_pct)
-        if max_usage > self.throttle_threshold:
-            print(f"\n⚠️  Rate limit warning: {max_usage:.1f}% usage")
-            print(f"   App: {self.app_usage_pct:.1f}%, Account: {self.account_usage_pct:.1f}%")
+        # Log warnings if approaching limits
+        self._log_rate_limit_warnings()
+
+    def _log_rate_limit_warnings(self):
+        """Log warnings if any rate limit metric is approaching threshold."""
+        warnings = []
+
+        # Check X-App-Usage metrics
+        if self.app_call_count > self.throttle_threshold:
+            warnings.append(f"App call count: {self.app_call_count:.1f}%")
+        if self.app_total_cputime > self.throttle_threshold:
+            warnings.append(f"App CPU time: {self.app_total_cputime:.1f}%")
+        if self.app_total_time > self.throttle_threshold:
+            warnings.append(f"App total time: {self.app_total_time:.1f}%")
+
+        # Check X-Ad-Account-Usage (per account)
+        for account_id, usage in self.ad_account_usage.items():
+            acc_pct = usage.get('acc_id_util_pct', 0)
+            if acc_pct > self.throttle_threshold:
+                warnings.append(f"Account {account_id}: {acc_pct:.1f}%")
+                reset_time = usage.get('reset_time_duration', 0)
+                if reset_time > 0:
+                    warnings.append(f"Resets in {reset_time}s")
+
+        # Check X-Business-Use-Case-Usage
+        for buc in self.buc_usage:
+            buc_type = buc.get('type', 'unknown')
+            call_count = buc.get('call_count', 0)
+            if call_count > self.throttle_threshold:
+                warnings.append(f"BUC {buc_type}: {call_count:.1f}%")
+                eta = buc.get('estimated_time_to_regain_access', 0)
+                if eta > 0:
+                    warnings.append(f"Regain access in {eta} min")
+
+        # Check legacy metrics
+        if self.legacy_app_usage_pct > self.throttle_threshold:
+            warnings.append(f"Legacy app: {self.legacy_app_usage_pct:.1f}%")
+        if self.legacy_account_usage_pct > self.throttle_threshold:
+            warnings.append(f"Legacy account: {self.legacy_account_usage_pct:.1f}%")
+
+        if warnings:
+            logger.warning(f"⚠️  Rate limit warning: {', '.join(warnings)}")
+
+    def get_max_usage_pct(self) -> float:
+        """
+        Get the maximum usage percentage across all rate limit metrics.
+
+        Returns:
+            Maximum usage percentage (0-100)
+        """
+        usage_values = [
+            self.app_call_count,
+            self.app_total_cputime,
+            self.app_total_time,
+            self.legacy_app_usage_pct,
+            self.legacy_account_usage_pct,
+        ]
+
+        # Add ad account usage percentages (per account)
+        for usage in self.ad_account_usage.values():
+            usage_values.append(usage.get('acc_id_util_pct', 0))
+
+        # Add BUC usage percentages
+        for buc in self.buc_usage:
+            usage_values.extend([
+                buc.get('call_count', 0),
+                buc.get('total_cputime', 0),
+                buc.get('total_time', 0),
+            ])
+
+        return max(usage_values) if usage_values else 0.0
+
     def should_throttle(self) -> bool:
         """
```
```diff
@@ -118,21 +405,39 @@ class MetaRateLimiter:
         Returns:
             True if usage exceeds threshold
         """
-        max_usage = max(self.app_usage_pct, self.account_usage_pct)
-        return max_usage > self.throttle_threshold
+        return self.get_max_usage_pct() > self.throttle_threshold
 
     def get_throttle_delay(self) -> float:
         """
-        Calculate delay based on current usage.
+        Calculate delay based on current usage and reset times.
+
+        Uses estimated_time_to_regain_access and reset_time_duration when available.
 
         Returns:
             Delay in seconds
         """
-        max_usage = max(self.app_usage_pct, self.account_usage_pct)
+        max_usage = self.get_max_usage_pct()
 
         if max_usage < self.throttle_threshold:
             return self.base_delay
 
+        # If we have estimated_time_to_regain_access from BUC header, use it
+        if self.estimated_time_to_regain_access > 0:
+            # Convert minutes to seconds and use as delay
+            delay = self.estimated_time_to_regain_access * 60
+            logger.info(f"Using BUC estimated_time_to_regain_access: {self.estimated_time_to_regain_access} min ({delay}s)")
+            return min(delay, self.max_retry_delay)
+
+        # Check if any ad account has reset_time_duration and high usage
+        for account_id, usage in self.ad_account_usage.items():
+            acc_pct = usage.get('acc_id_util_pct', 0)
+            reset_time = usage.get('reset_time_duration', 0)
+            if reset_time > 0 and acc_pct >= 90:
+                # Use a fraction of reset_time_duration as delay
+                delay = min(reset_time * 0.5, self.max_retry_delay)
+                logger.info(f"Using Ad Account {account_id} reset_time_duration: {reset_time}s (delay: {delay}s)")
+                return delay
+
         # Progressive delay based on usage
         # 75% = base_delay, 90% = 2x, 95% = 5x, 99% = 10x
         if max_usage >= 95:
```
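As a worked example of the progressive tiers, the mapping in the comment above ("75% = base_delay, 90% = 2x, 95% = 5x, 99% = 10x") can be sketched as follows; this mirrors the documented tiers, not necessarily the exact branch boundaries of the method, which this capture truncates:

```python
def tier_delay(usage_pct: float, base: float = 1.0) -> float:
    """Map a usage percentage to the documented delay multipliers."""
    if usage_pct >= 99:
        return 10 * base
    if usage_pct >= 95:
        return 5 * base
    if usage_pct >= 90:
        return 2 * base
    return base  # at or above the 75% threshold but under 90%

assert tier_delay(92) == 2.0 and tier_delay(96) == 5.0
```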
```diff
@@ -166,7 +471,8 @@ class MetaRateLimiter:
 
         if delay > self.base_delay:
             self.throttled_requests += 1
-            print(f"⏸️  Throttling for {delay:.1f}s (usage: {max(self.app_usage_pct, self.account_usage_pct):.1f}%)")
+            max_usage = self.get_max_usage_pct()
+            logger.info(f"⏸️  Throttling for {delay:.1f}s (max usage: {max_usage:.1f}%)")
 
         await asyncio.sleep(delay)
 
```
```diff
@@ -209,13 +515,25 @@ class MetaRateLimiter:
             except Exception as e:
                 error_message = str(e).lower()
 
-                # Check if it's a rate limit error
+                # Check if it's a rate limit error (expanded list based on Meta docs)
                 is_rate_limit = (
                     'rate limit' in error_message or
                     'too many requests' in error_message or
                     'throttle' in error_message or
-                    'error code 17' in error_message or  # Meta's rate limit error code
-                    'error code 80004' in error_message  # Insights rate limit
+                    'error code 4' in error_message or      # App rate limit
+                    'error code 17' in error_message or     # User rate limit
+                    'error code 32' in error_message or     # Pages rate limit
+                    'error code 613' in error_message or    # Custom rate limit
+                    'error code 80000' in error_message or  # Ads Insights BUC
+                    'error code 80001' in error_message or  # Pages BUC
+                    'error code 80002' in error_message or  # Instagram BUC
+                    'error code 80003' in error_message or  # Custom Audience BUC
+                    'error code 80004' in error_message or  # Ads Management BUC
+                    'error code 80005' in error_message or  # LeadGen BUC
+                    'error code 80006' in error_message or  # Messenger BUC
+                    'error code 80008' in error_message or  # WhatsApp BUC
+                    'error code 80009' in error_message or  # Catalog Management BUC
+                    'error code 80014' in error_message     # Catalog Batch BUC
                 )
 
                 if is_rate_limit:
```
```diff
@@ -226,12 +544,12 @@ class MetaRateLimiter:
                         (2 ** (retry + 1)) * self.base_delay,
                         self.max_retry_delay
                     )
-                    print(f"\n🔄 Rate limit hit! Retrying in {backoff_delay:.1f}s (attempt {retry + 1}/{self.max_retries})")
-                    print(f"   Error: {e}")
+                    logger.warning(f"🔄 Rate limit hit! Retrying in {backoff_delay:.1f}s (attempt {retry + 1}/{self.max_retries})")
+                    logger.warning(f"   Error: {e}")
                     await asyncio.sleep(backoff_delay)
                     continue
                 else:
-                    print(f"\n❌ Rate limit error - max retries exhausted")
+                    logger.error(f"❌ Rate limit error - max retries exhausted: {e}")
                     raise
 
         # Not a rate limit error, re-raise immediately
```
@@ -245,29 +563,95 @@ class MetaRateLimiter:
         Get current rate limiter statistics.

         Returns:
-            Dictionary with stats
+            Dictionary with comprehensive stats from all headers
         """
         return {
+            # Request stats
             'total_requests': self.total_requests,
             'throttled_requests': self.throttled_requests,
             'rate_limit_errors': self.rate_limit_errors,
-            'app_usage_pct': self.app_usage_pct,
-            'account_usage_pct': self.account_usage_pct,
-            'max_usage_pct': max(self.app_usage_pct, self.account_usage_pct),
+
+            # X-App-Usage metrics
+            'app_call_count': self.app_call_count,
+            'app_total_cputime': self.app_total_cputime,
+            'app_total_time': self.app_total_time,
+
+            # X-Ad-Account-Usage metrics (per account)
+            'ad_account_usage': self.ad_account_usage,
+
+            # X-Business-Use-Case-Usage metrics
+            'buc_usage': self.buc_usage,
+            'estimated_time_to_regain_access': self.estimated_time_to_regain_access,
+
+            # Legacy metrics
+            'legacy_app_usage_pct': self.legacy_app_usage_pct,
+            'legacy_account_usage_pct': self.legacy_account_usage_pct,
+
+            # Computed metrics
+            'max_usage_pct': self.get_max_usage_pct(),
             'is_throttling': self.should_throttle(),
         }

     def print_stats(self):
-        """Print current statistics."""
+        """Print current statistics with all rate limit metrics."""
         stats = self.get_stats()
-        print("\n" + "="*60)
-        print("RATE LIMITER STATISTICS")
-        print("="*60)
-        print(f"Total Requests: {stats['total_requests']}")
-        print(f"Throttled Requests: {stats['throttled_requests']}")
-        print(f"Rate Limit Errors: {stats['rate_limit_errors']}")
-        print(f"App Usage: {stats['app_usage_pct']:.1f}%")
-        print(f"Account Usage: {stats['account_usage_pct']:.1f}%")
-        print(f"Max Usage: {stats['max_usage_pct']:.1f}%")
-        print(f"Currently Throttled: {stats['is_throttling']}")
-        print("="*60 + "\n")
+
+        output = []
+        output.append("\n" + "="*70)
+        output.append("RATE LIMITER STATISTICS")
+        output.append("="*70)
+
+        # Request stats
+        output.append(f"Total Requests: {stats['total_requests']}")
+        output.append(f"Throttled Requests: {stats['throttled_requests']}")
+        output.append(f"Rate Limit Errors: {stats['rate_limit_errors']}")
+        output.append("")
+
+        # X-App-Usage
+        output.append("X-App-Usage (Platform Rate Limits):")
+        output.append(f"  Call Count: {stats['app_call_count']:.1f}%")
+        output.append(f"  Total CPU Time: {stats['app_total_cputime']:.1f}%")
+        output.append(f"  Total Time: {stats['app_total_time']:.1f}%")
+        output.append("")
+
+        # X-Ad-Account-Usage (per account)
+        if stats['ad_account_usage']:
+            # Only show accounts with data (skip "unknown" accounts with 0 usage)
+            accounts_to_show = {
+                account_id: usage
+                for account_id, usage in stats['ad_account_usage'].items()
+                if account_id != 'unknown' or usage.get('acc_id_util_pct', 0) > 0
+            }
+
+            if accounts_to_show:
+                output.append("X-Ad-Account-Usage (Per Account):")
+                for account_id, usage in accounts_to_show.items():
+                    output.append(f"  Account: {account_id}")
+                    output.append(f"    Usage: {usage.get('acc_id_util_pct', 0):.1f}%")
+                    output.append(f"    Reset Time: {usage.get('reset_time_duration', 0)}s")
+                    output.append(f"    API Access Tier: {usage.get('ads_api_access_tier') or 'N/A'}")
+                output.append("")
+
+        # X-Business-Use-Case-Usage
+        if stats['buc_usage']:
+            output.append("X-Business-Use-Case-Usage:")
+            for buc in stats['buc_usage']:
+                output.append(f"  Type: {buc.get('type', 'unknown')}")
+                output.append(f"    Call Count: {buc.get('call_count', 0):.1f}%")
+                output.append(f"    Total CPU Time: {buc.get('total_cputime', 0):.1f}%")
+                output.append(f"    Total Time: {buc.get('total_time', 0):.1f}%")
+                output.append(f"    Est. Time to Regain: {buc.get('estimated_time_to_regain_access', 0)} min")
+            output.append("")
+
+        # Legacy metrics
+        output.append("Legacy (x-fb-ads-insights-throttle):")
+        output.append(f"  App Usage: {stats['legacy_app_usage_pct']:.1f}%")
+        output.append(f"  Account Usage: {stats['legacy_account_usage_pct']:.1f}%")
+        output.append("")
+
+        # Summary
+        output.append(f"Max Usage Across All Metrics: {stats['max_usage_pct']:.1f}%")
+        output.append(f"Currently Throttled: {stats['is_throttling']}")
+        output.append("="*70 + "\n")
+
+        logger.info("\n".join(output))
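The richer stats dictionary is usable programmatically as well as for logging. Below is a minimal caller-side sketch, assuming an already-initialized `MetaRateLimiter` instance (named `limiter` here for illustration) and using only keys that `get_stats()` returns:

```python
import asyncio

async def pause_if_near_limit(limiter, threshold: float = 90.0, pause_s: float = 60.0) -> None:
    """Sleep before the next batch of API calls when any usage metric nears 100%."""
    stats = limiter.get_stats()
    if stats["is_throttling"] or stats["max_usage_pct"] >= threshold:
        # A fixed pause keeps the sketch simple; the per-account
        # reset_time_duration values could drive a smarter wait.
        await asyncio.sleep(pause_s)
```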
@@ -4,6 +4,7 @@ Runs periodically to build time-series data for dashboards.
 """

 import asyncio
+import logging
 import os
 from datetime import datetime, timedelta, timezone, date
 from typing import Optional, Dict
@@ -19,6 +20,27 @@ from facebook_business.exceptions import FacebookRequestError
 from .database import TimescaleDBClient
 from .rate_limiter import MetaRateLimiter
 from .token_manager import MetaTokenManager
+from .view_manager import ViewManager
+
+# Set up logger
+logger = logging.getLogger(__name__)
+
+
+common_fields = [
+    AdsInsights.Field.impressions,
+    AdsInsights.Field.clicks,
+    AdsInsights.Field.spend,
+    AdsInsights.Field.cpc,
+    AdsInsights.Field.cpm,
+    AdsInsights.Field.ctr,
+    AdsInsights.Field.cpp,
+    AdsInsights.Field.reach,
+    AdsInsights.Field.frequency,
+    AdsInsights.Field.actions,
+    AdsInsights.Field.cost_per_action_type,
+    AdsInsights.Field.date_start,
+    AdsInsights.Field.date_stop,
+]
+
+
 class ScheduledInsightsGrabber:
@@ -80,6 +102,9 @@ class ScheduledInsightsGrabber:
         # Database client
         self.db: Optional[TimescaleDBClient] = None

+        # View manager for materialized views
+        self.view_manager: Optional[ViewManager] = None
+
         # Rate limiter with backoff (Meta best practices)
         self.rate_limiter = MetaRateLimiter(
             base_delay=2.0,  # 2 seconds base delay
@@ -340,11 +365,19 @@ class ScheduledInsightsGrabber:
         for campaign in campaigns:
             campaign_id = campaign['id']
             campaign_name = campaign.get('name')
+            campaign_dict = dict(campaign)
+
+            # DEBUG: Log all campaign data before upsert
+            logger.debug(
+                f"Campaign metadata before upsert: id={campaign_id}, "
+                f"name={campaign_name!r}, status={campaign.get('status')}, "
+                f"objective={campaign.get('objective')}, raw={campaign_dict}"
+            )

             # Track campaigns without names for debugging
             if not campaign_name:
                 campaigns_without_name.append(campaign_id)
-                print(f"  WARNING: Campaign {campaign_id} has no name. Raw data: {dict(campaign)}")
+                logger.warning(f"Campaign {campaign_id} has no name. Raw data: {campaign_dict}")

             await self.db.upsert_campaign(
                 campaign_id=campaign_id,
@@ -358,6 +391,7 @@ class ScheduledInsightsGrabber:
         print(f"  {count} campaigns cached for {account_id}")
         if campaigns_without_name:
             print(f"  ⚠️  {len(campaigns_without_name)} campaigns without names: {campaigns_without_name}")
+            logger.warning(f"{len(campaigns_without_name)} campaigns without names: {campaigns_without_name}")

     async def cache_adsets_metadata(self, account_id: str, limit: int = 100):
         """
@@ -393,6 +427,131 @@ class ScheduledInsightsGrabber:

         print(f"  {count} ad sets cached for {account_id}")

+    async def _master_grab_insights(
+        self,
+        account_id: str,
+        fields: list,
+        level: str,
+        db_insert_func,
+        date_preset: Optional[str] = None,
+        start_date: Optional[date] = None,
+        end_date: Optional[date] = None,
+        breakdowns: Optional[list] = None,
+        limit: Optional[int] = None,
+        required_fields: Optional[dict] = None,
+        extra_data_processor=None,
+    ) -> tuple[int, Optional[date]]:
+        """
+        Master method to grab and store insights at any level.
+
+        Args:
+            account_id: Ad account ID
+            fields: List of AdsInsights fields to retrieve
+            level: Insights level ("account", "campaign", "adset", etc.)
+            db_insert_func: Database insert function to call for each insight
+            date_preset: Meta date preset (e.g., "today", "yesterday"). Use either this or start_date/end_date
+            start_date: Start date for custom date range (optional)
+            end_date: End date for custom date range (optional)
+            breakdowns: List of breakdown fields (optional)
+            limit: Maximum number of results (optional)
+            required_fields: Dict of field_name -> label for validation before insert
+            extra_data_processor: Optional callable to process/add extra data to insight_dict
+
+        Returns:
+            Tuple of (count of records stored, date_start from insights)
+        """
+        # Build params
+        params = {"level": level}
+
+        if date_preset:
+            params["date_preset"] = date_preset
+            date_preset_for_db = date_preset
+        else:
+            # Use time_range for custom date ranges
+            params["time_range"] = {
+                "since": start_date.isoformat(),
+                "until": end_date.isoformat(),
+            }
+            params["time_increment"] = 1  # Daily breakdown
+            date_preset_for_db = "custom"
+
+        if breakdowns:
+            params["breakdowns"] = breakdowns
+
+        if limit:
+            params["limit"] = limit
+
+        # Fetch insights from Meta API
+        ad_account = AdAccount(account_id)
+        try:
+            insights = await self._rate_limited_request(
+                ad_account.get_insights,
+                fields=fields,
+                params=params,
+            )
+        except FacebookRequestError as e:
+            error_code = e.api_error_code()
+            if error_code in [190, 102]:  # Invalid OAuth token errors
+                raise ValueError(f"Access token is invalid (error {error_code}): {e.api_error_message()}")
+            raise
+
+        # Get account timezone from database
+        account_timezone = await self._get_account_timezone(account_id)
+
+        # Store insights
+        count = 0
+        date_start_value = None
+
+        for insight in insights:
+            insight_dict = dict(insight)
+
+            # Extract date_start if available (for return value)
+            date_start_str = insight_dict.get("date_start")
+            if date_start_str and date_start_value is None:
+                date_start_value = date.fromisoformat(date_start_str)
+
+            # Check required fields before processing
+            if required_fields:
+                skip = False
+                for field_name, field_label in required_fields.items():
+                    if not insight_dict.get(field_name):
+                        skip = True
+                        break
+                if skip:
+                    continue
+
+            # Call extra processor if provided
+            if extra_data_processor:
+                extra_data_processor(insight_dict)
+
+            # Compute appropriate timestamp based on date_start and account timezone
+            timestamp = self._compute_timestamp(date_start_str, account_timezone)
+
+            # Build kwargs for insert function based on level
+            kwargs = {
+                "time": timestamp,
+                "account_id": account_id,
+                "data": insight_dict,
+                "date_preset": date_preset_for_db,
+            }
+
+            # Add level-specific parameters
+            if level == "campaign":
+                kwargs["campaign_id"] = insight_dict.get("campaign_id")
+            elif level == "adset":
+                kwargs["adset_id"] = insight_dict.get("adset_id")
+                kwargs["campaign_id"] = insight_dict.get("campaign_id")
+
+            # Add country for breakdown queries
+            if "country" in insight_dict:
+                kwargs["country"] = insight_dict.get("country")
+
+            # Call the appropriate database insert function with level-specific parameters
+            await db_insert_func(**kwargs)
+            count += 1
+
+        return count, date_start_value
+
     async def grab_account_insights(self, account_id: str, date_preset: str = "today") -> Optional[date]:
         """
         Grab and store account-level insights.
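Extending this pattern takes only a thin wrapper around `_master_grab_insights`. A sketch of a hypothetical device-platform grabber: `insert_campaign_insights_by_device` and its table are not part of this change and would have to be added to `TimescaleDBClient`, and since the master method only promotes `country` to a dedicated column, the device value would have to be read out of `data` (or the master method extended the same way):

```python
    # Sketch only - assumes a (hypothetical) insert_campaign_insights_by_device
    # on TimescaleDBClient; everything else reuses existing pieces.
    async def grab_campaign_insights_by_device(self, account_id: str, date_preset: str = "today", limit: int = 50):
        fields = common_fields + [
            AdsInsights.Field.campaign_id,
            AdsInsights.Field.campaign_name,
        ]
        count, _ = await self._master_grab_insights(
            account_id=account_id,
            fields=fields,
            level="campaign",
            db_insert_func=self.db.insert_campaign_insights_by_device,  # hypothetical
            date_preset=date_preset,
            breakdowns=[AdsInsights.Breakdowns.device_platform],
            limit=limit,
            required_fields={"campaign_id": "campaign_id"},
        )
        print(f"  Campaign insights by device stored for {account_id} ({count} records)")
```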
@@ -420,50 +579,13 @@ class ScheduledInsightsGrabber:
             AdsInsights.Field.date_stop,
         ]

-        params = {
-            "date_preset": date_preset,
-            "level": "account",
-        }
-
-        ad_account = AdAccount(account_id)
-        try:
-            insights = await self._rate_limited_request(
-                ad_account.get_insights,
-                fields=fields,
-                params=params,
-            )
-        except FacebookRequestError as e:
-            # Check if it's a token error
-            error_code = e.api_error_code()
-            if error_code in [190, 102]:  # Invalid OAuth token errors
-                raise ValueError(f"Access token is invalid (error {error_code}): {e.api_error_message()}")
-            raise
-
-        # Get account timezone from database
-        account_timezone = await self._get_account_timezone(account_id)
-
-        # Store insights
-        count = 0
-        date_start_value = None
-
-        for insight in insights:
-            insight_dict = dict(insight)
-
-            # Extract date_start if available
-            date_start_str = insight_dict.get("date_start")
-            if date_start_str and date_start_value is None:
-                date_start_value = date.fromisoformat(date_start_str)
-
-            # Compute appropriate timestamp based on date_start and account timezone
-            timestamp = self._compute_timestamp(date_start_str, account_timezone)
-
-            await self.db.insert_account_insights(
-                time=timestamp,
-                account_id=account_id,
-                data=insight_dict,
-                date_preset=date_preset,
-            )
-            count += 1
+        count, date_start_value = await self._master_grab_insights(
+            account_id=account_id,
+            fields=fields,
+            level="account",
+            db_insert_func=self.db.insert_account_insights,
+            date_preset=date_preset,
+        )

         print(f"  Account insights stored for {account_id} ({count} records, date: {date_start_value})")
         return date_start_value
@@ -477,64 +599,22 @@ class ScheduledInsightsGrabber:
             date_preset: Meta date preset
             limit: Maximum number of campaigns
         """
-        fields = [
+        fields = common_fields + [
             AdsInsights.Field.campaign_id,
             AdsInsights.Field.campaign_name,
-            AdsInsights.Field.impressions,
-            AdsInsights.Field.clicks,
-            AdsInsights.Field.spend,
-            AdsInsights.Field.ctr,
-            AdsInsights.Field.cpc,
-            AdsInsights.Field.cpm,
-            AdsInsights.Field.reach,
-            AdsInsights.Field.actions,
-            AdsInsights.Field.date_start,
-            AdsInsights.Field.date_stop,
         ]

-        params = {
-            "date_preset": date_preset,
-            "level": "campaign",
-            "limit": limit,
-        }
-
-        ad_account = AdAccount(account_id)
-        try:
-            insights = await self._rate_limited_request(
-                ad_account.get_insights,
-                fields=fields,
-                params=params,
-            )
-        except FacebookRequestError as e:
-            error_code = e.api_error_code()
-            if error_code in [190, 102]:
-                raise ValueError(f"Access token is invalid (error {error_code}): {e.api_error_message()}")
-            raise
-
-        # Get account timezone from database
-        account_timezone = await self._get_account_timezone(account_id)
-
-        # Store insights (metadata is automatically cached from insights data)
-        count = 0
-        for insight in insights:
-            campaign_id = insight.get('campaign_id')
-            if campaign_id:
-                insight_dict = dict(insight)
-
-                # Compute appropriate timestamp based on date_preset and account timezone
-                date_start_str = insight_dict.get("date_start")
-                timestamp = self._compute_timestamp(date_start_str, account_timezone)
-
-                # Insert insights - metadata is automatically cached from the insights data
-                await self.db.insert_campaign_insights(
-                    time=timestamp,
-                    campaign_id=campaign_id,
-                    account_id=account_id,
-                    data=insight_dict,
-                    date_preset=date_preset,
-                    cache_metadata=True,  # Automatically cache campaign name from insights
-                )
-                count += 1
+        count, _ = await self._master_grab_insights(
+            account_id=account_id,
+            fields=fields,
+            level="campaign",
+            db_insert_func=self.db.insert_campaign_insights,
+            date_preset=date_preset,
+            limit=limit,
+            required_fields={"campaign_id": "campaign_id"},
+        )

         print(f"  Campaign insights stored for {account_id} ({count} records)")
@@ -563,54 +643,45 @@ class ScheduledInsightsGrabber:
             AdsInsights.Field.date_stop,
         ]

-        params = {
-            "date_preset": date_preset,
-            "level": "adset",
-            "limit": limit,
-        }
-
-        ad_account = AdAccount(account_id)
-        try:
-            insights = await self._rate_limited_request(
-                ad_account.get_insights,
-                fields=fields,
-                params=params,
-            )
-        except FacebookRequestError as e:
-            error_code = e.api_error_code()
-            if error_code in [190, 102]:
-                raise ValueError(f"Access token is invalid (error {error_code}): {e.api_error_message()}")
-            raise
-
-        # Get account timezone from database
-        account_timezone = await self._get_account_timezone(account_id)
-
-        # Store insights (metadata is automatically cached from insights data)
-        count = 0
-        for insight in insights:
-            adset_id = insight.get('adset_id')
-            campaign_id = insight.get('campaign_id')
-            if adset_id and campaign_id:
-                insight_dict = dict(insight)
-
-                # Compute appropriate timestamp based on date_preset and account timezone
-                date_start_str = insight_dict.get("date_start")
-                timestamp = self._compute_timestamp(date_start_str, account_timezone)
-
-                # Insert insights - metadata is automatically cached from the insights data
-                await self.db.insert_adset_insights(
-                    time=timestamp,
-                    adset_id=adset_id,
-                    campaign_id=campaign_id,
-                    account_id=account_id,
-                    data=insight_dict,
-                    date_preset=date_preset,
-                    cache_metadata=True,  # Automatically cache adset/campaign from insights
-                )
-                count += 1
+        count, _ = await self._master_grab_insights(
+            account_id=account_id,
+            fields=fields,
+            level="adset",
+            db_insert_func=self.db.insert_adset_insights,
+            date_preset=date_preset,
+            limit=limit,
+            required_fields={"adset_id": "adset_id", "campaign_id": "campaign_id"},
+        )

         print(f"  Ad set insights stored for {account_id} ({count} records)")

+    async def grab_campaign_insights_by_country(self, account_id: str, date_preset: str = "today", limit: int = 50):
+        """
+        Grab and store campaign-level insights broken down by country.
+
+        Args:
+            account_id: Ad account ID
+            date_preset: Meta date preset
+            limit: Maximum number of campaigns
+        """
+        fields = common_fields + [
+            AdsInsights.Field.campaign_id,
+            AdsInsights.Field.campaign_name,
+        ]
+
+        count, _ = await self._master_grab_insights(
+            account_id=account_id,
+            fields=fields,
+            level="campaign",
+            db_insert_func=self.db.insert_campaign_insights_by_country,
+            date_preset=date_preset,
+            breakdowns=[AdsInsights.Breakdowns.country],
+            limit=limit,
+            required_fields={"campaign_id": "campaign_id", "country": "country"},
+        )
+
+        print(f"  Campaign insights by country stored for {account_id} ({count} records)")
+
     async def grab_account_insights_for_date_range(
         self,
         account_id: str,
@@ -644,48 +715,14 @@ class ScheduledInsightsGrabber:
             AdsInsights.Field.date_stop,
         ]

-        # Use time_range instead of date_preset for custom date ranges
-        params = {
-            "time_range": {
-                "since": start_date.isoformat(),
-                "until": end_date.isoformat(),
-            },
-            "level": "account",
-            "time_increment": 1,  # Daily breakdown
-        }
-
-        ad_account = AdAccount(account_id)
-        try:
-            insights = await self._rate_limited_request(
-                ad_account.get_insights,
-                fields=fields,
-                params=params,
-            )
-        except FacebookRequestError as e:
-            error_code = e.api_error_code()
-            if error_code in [190, 102]:
-                raise ValueError(f"Access token is invalid (error {error_code}): {e.api_error_message()}")
-            raise
-
-        # Get account timezone from database
-        account_timezone = await self._get_account_timezone(account_id)
-
-        # Store insights
-        count = 0
-        for insight in insights:
-            insight_dict = dict(insight)
-
-            # Compute appropriate timestamp based on date_start and account timezone
-            date_start_str = insight_dict.get("date_start")
-            timestamp = self._compute_timestamp(date_start_str, account_timezone)
-
-            await self.db.insert_account_insights(
-                time=timestamp,
-                account_id=account_id,
-                data=insight_dict,
-                date_preset="custom",  # Indicate this was a custom date range
-            )
-            count += 1
+        count, _ = await self._master_grab_insights(
+            account_id=account_id,
+            fields=fields,
+            level="account",
+            db_insert_func=self.db.insert_account_insights,
+            start_date=start_date,
+            end_date=end_date,
+        )

         return count
@@ -723,51 +760,16 @@ class ScheduledInsightsGrabber:
             AdsInsights.Field.date_stop,
         ]

-        params = {
-            "time_range": {
-                "since": start_date.isoformat(),
-                "until": end_date.isoformat(),
-            },
-            "level": "campaign",
-            "time_increment": 1,  # Daily breakdown
-            "limit": limit,
-        }
-
-        ad_account = AdAccount(account_id)
-        try:
-            insights = await self._rate_limited_request(
-                ad_account.get_insights,
-                fields=fields,
-                params=params,
-            )
-        except FacebookRequestError as e:
-            error_code = e.api_error_code()
-            if error_code in [190, 102]:
-                raise ValueError(f"Access token is invalid (error {error_code}): {e.api_error_message()}")
-            raise
-
-        # Get account timezone from database
-        account_timezone = await self._get_account_timezone(account_id)
-
-        # Store insights
-        count = 0
-        for insight in insights:
-            campaign_id = insight.get('campaign_id')
-            if campaign_id:
-                insight_dict = dict(insight)
-
-                date_start_str = insight_dict.get("date_start")
-                timestamp = self._compute_timestamp(date_start_str, account_timezone)
-
-                await self.db.insert_campaign_insights(
-                    time=timestamp,
-                    campaign_id=campaign_id,
-                    account_id=account_id,
-                    data=insight_dict,
-                    date_preset="custom",
-                    cache_metadata=True,
-                )
-                count += 1
+        count, _ = await self._master_grab_insights(
+            account_id=account_id,
+            fields=fields,
+            level="campaign",
+            db_insert_func=self.db.insert_campaign_insights,
+            start_date=start_date,
+            end_date=end_date,
+            limit=limit,
+            required_fields={"campaign_id": "campaign_id"},
+        )

         return count
@@ -806,53 +808,16 @@ class ScheduledInsightsGrabber:
             AdsInsights.Field.date_stop,
         ]

-        params = {
-            "time_range": {
-                "since": start_date.isoformat(),
-                "until": end_date.isoformat(),
-            },
-            "level": "adset",
-            "time_increment": 1,  # Daily breakdown
-            "limit": limit,
-        }
-
-        ad_account = AdAccount(account_id)
-        try:
-            insights = await self._rate_limited_request(
-                ad_account.get_insights,
-                fields=fields,
-                params=params,
-            )
-        except FacebookRequestError as e:
-            error_code = e.api_error_code()
-            if error_code in [190, 102]:
-                raise ValueError(f"Access token is invalid (error {error_code}): {e.api_error_message()}")
-            raise
-
-        # Get account timezone from database
-        account_timezone = await self._get_account_timezone(account_id)
-
-        # Store insights
-        count = 0
-        for insight in insights:
-            adset_id = insight.get('adset_id')
-            campaign_id = insight.get('campaign_id')
-            if adset_id and campaign_id:
-                insight_dict = dict(insight)
-
-                date_start_str = insight_dict.get("date_start")
-                timestamp = self._compute_timestamp(date_start_str, account_timezone)
-
-                await self.db.insert_adset_insights(
-                    time=timestamp,
-                    adset_id=adset_id,
-                    campaign_id=campaign_id,
-                    account_id=account_id,
-                    data=insight_dict,
-                    date_preset="custom",
-                    cache_metadata=True,
-                )
-                count += 1
+        count, _ = await self._master_grab_insights(
+            account_id=account_id,
+            fields=fields,
+            level="adset",
+            db_insert_func=self.db.insert_adset_insights,
+            start_date=start_date,
+            end_date=end_date,
+            limit=limit,
+            required_fields={"adset_id": "adset_id", "campaign_id": "campaign_id"},
+        )

         return count
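The date-range variants above all share the same shape, so a backfill helper only has to pick the window. A sketch, assuming an initialized `ScheduledInsightsGrabber` named `grabber` (the `start_date`/`end_date` parameter names are inferred from the hunk context above):

```python
from datetime import date, timedelta

async def backfill_last_week(grabber, account_id: str) -> int:
    """Re-pull daily account insights for the last 7 full days."""
    end = date.today() - timedelta(days=1)
    start = end - timedelta(days=7)
    return await grabber.grab_account_insights_for_date_range(
        account_id=account_id,
        start_date=start,
        end_date=end,
    )
```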
@@ -1019,6 +984,7 @@ class ScheduledInsightsGrabber:
             print("Grabbing today's insights...")
             date_start = await self.grab_account_insights(account_id, date_preset="today")
             await self.grab_campaign_insights(account_id, date_preset="today", limit=50)
+            await self.grab_campaign_insights_by_country(account_id, date_preset="today", limit=50)
             await self.grab_adset_insights(account_id, date_preset="today", limit=50)

             # Track today's date from first account
@@ -1071,6 +1037,7 @@ class ScheduledInsightsGrabber:
             print("Grabbing yesterday's insights...")
             await self.grab_account_insights(account_id, date_preset="yesterday")
             await self.grab_campaign_insights(account_id, date_preset="yesterday", limit=50)
+            await self.grab_campaign_insights_by_country(account_id, date_preset="yesterday", limit=50)
             await self.grab_adset_insights(account_id, date_preset="yesterday", limit=50)

         print(f"✓ Completed yesterday's data for {account_id}")
@@ -1122,6 +1089,17 @@ class ScheduledInsightsGrabber:
         print("\n" + "-" * 60)
         self.rate_limiter.print_stats()

+        # Refresh materialized views after new data has been inserted
+        if self.view_manager:
+            print("\n" + "-" * 60)
+            print("Refreshing materialized views...")
+            try:
+                await self.view_manager.refresh_all_views()
+                print("✓ Materialized views refreshed successfully")
+            except Exception as e:
+                print(f"⚠️  Warning: Failed to refresh materialized views: {e}")
+                # Don't fail the entire cycle, just log the warning
+
         print("\n" + "="*60)
         print("COLLECTION CYCLE COMPLETE")
         print("="*60 + "\n")
@@ -1151,6 +1129,10 @@ class ScheduledInsightsGrabber:
         # Initialize database schema (idempotent - safe to run multiple times)
         await self.db.initialize_schema()

+        # Initialize view manager and create/ensure views exist
+        self.view_manager = ViewManager(self.db.pool)
+        await self.view_manager.initialize_views()
+
         # Load all accessible ad accounts
         await self.load_ad_accounts()
@@ -1201,6 +1183,16 @@ class ScheduledInsightsGrabber:

 async def async_main():
     """Async main entry point for scheduled grabber."""
+    # Configure logging
+    logging.basicConfig(
+        level=logging.INFO,
+        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
+        handlers=[
+            logging.FileHandler('meta_api_grabber.log'),
+            logging.StreamHandler()
+        ]
+    )
+
     try:
         # Initialize with max_accounts=3 for conservative start
         # Set max_accounts=None to process all accessible accounts
91 src/meta_api_grabber/test_campaign_insights.py Normal file
@@ -0,0 +1,91 @@
"""
Test to see what campaign insights API actually returns for campaign_name.
"""

import os
import json
from dotenv import load_dotenv
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.adsinsights import AdsInsights

# Load environment variables
load_dotenv()

# Initialize Facebook Ads API
access_token = os.getenv("META_ACCESS_TOKEN")
app_secret = os.getenv("META_APP_SECRET")
app_id = os.getenv("META_APP_ID")

if not all([access_token, app_secret, app_id]):
    raise ValueError("Missing required environment variables")

FacebookAdsApi.init(
    app_id=app_id,
    app_secret=app_secret,
    access_token=access_token,
)

account_id = "act_238334370765317"

print(f"Testing campaign INSIGHTS fetch for account: {account_id}")
print("=" * 60)

# Get campaign insights (this is what scheduled_grabber.py does)
print("\nFetching campaign-level insights with 'today' preset...")
print("-" * 60)

fields = [
    AdsInsights.Field.campaign_id,
    AdsInsights.Field.campaign_name,
    AdsInsights.Field.impressions,
    AdsInsights.Field.clicks,
    AdsInsights.Field.spend,
    AdsInsights.Field.ctr,
    AdsInsights.Field.cpc,
    AdsInsights.Field.cpm,
    AdsInsights.Field.reach,
    AdsInsights.Field.actions,
    AdsInsights.Field.date_start,
    AdsInsights.Field.date_stop,
]

params = {
    "date_preset": "today",
    "level": "campaign",
    "limit": 10,
}

ad_account = AdAccount(account_id)
insights = ad_account.get_insights(
    fields=fields,
    params=params,
)

insights_list = list(insights)
print(f"Found {len(insights_list)} campaign insights\n")

campaigns_without_names = []
for i, insight in enumerate(insights_list, 1):
    insight_dict = dict(insight)
    campaign_id = insight_dict.get('campaign_id')
    campaign_name = insight_dict.get('campaign_name')

    print(f"Campaign Insight {i}:")
    print(f"  Campaign ID: {campaign_id}")
    print(f"  Campaign Name: {campaign_name if campaign_name else '❌ MISSING'}")
    print(f"  Impressions: {insight_dict.get('impressions')}")
    print(f"  Spend: {insight_dict.get('spend')}")
    print(f"  Keys available: {list(insight_dict.keys())}")
    print(f"  Raw JSON: {json.dumps(insight_dict, indent=4, default=str)}")
    print()

    if not campaign_name:
        campaigns_without_names.append(campaign_id)

print("=" * 60)
if campaigns_without_names:
    print(f"⚠️  {len(campaigns_without_names)} campaigns WITHOUT names in insights:")
    print(f"   {campaigns_without_names}")
else:
    print("✓ All campaigns have names in insights!")
118 src/meta_api_grabber/test_campaigns.py Normal file
@@ -0,0 +1,118 @@
"""
Test script to diagnose campaign name issues for a specific ad account.
"""

import os
import json
from dotenv import load_dotenv
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.campaign import Campaign

# Load environment variables
load_dotenv()

# Initialize Facebook Ads API
access_token = os.getenv("META_ACCESS_TOKEN")
app_secret = os.getenv("META_APP_SECRET")
app_id = os.getenv("META_APP_ID")

if not all([access_token, app_secret, app_id]):
    raise ValueError("Missing required environment variables")

FacebookAdsApi.init(
    app_id=app_id,
    app_secret=app_secret,
    access_token=access_token,
)

# Test account ID
account_id = "act_238334370765317"

print(f"Testing campaign fetch for account: {account_id}")
print("=" * 60)

# Get campaigns using get_campaigns
print("\n1. Testing AdAccount.get_campaigns()...")
print("-" * 60)

ad_account = AdAccount(account_id)
campaigns = ad_account.get_campaigns(
    fields=[
        Campaign.Field.name,
        Campaign.Field.status,
        Campaign.Field.objective,
        Campaign.Field.id,
    ],
    params={'limit': 10}
)

campaigns_list = list(campaigns)
print(f"Found {len(campaigns_list)} campaigns\n")

campaigns_without_names = []
for i, campaign in enumerate(campaigns_list, 1):
    campaign_dict = dict(campaign)
    campaign_id = campaign_dict.get('id')
    campaign_name = campaign_dict.get('name')

    print(f"Campaign {i}:")
    print(f"  ID: {campaign_id}")
    print(f"  Name: {campaign_name if campaign_name else '❌ MISSING'}")
    print(f"  Status: {campaign_dict.get('status')}")
    print(f"  Objective: {campaign_dict.get('objective')}")
    print(f"  Raw data: {json.dumps(campaign_dict, indent=4)}")
    print()

    if not campaign_name:
        campaigns_without_names.append(campaign_id)

# If we found campaigns without names, try fetching them individually
if campaigns_without_names:
    print("\n2. Retrying campaigns without names (individual fetch)...")
    print("-" * 60)

    for campaign_id in campaigns_without_names:
        print(f"\nFetching campaign {campaign_id} directly...")
        try:
            campaign_obj = Campaign(campaign_id)
            campaign_data = campaign_obj.api_get(
                fields=[
                    Campaign.Field.name,
                    Campaign.Field.status,
                    Campaign.Field.objective,
                    Campaign.Field.account_id,
                ]
            )

            campaign_dict = dict(campaign_data)
            print(f"  Direct fetch result:")
            print(f"    Name: {campaign_dict.get('name', '❌ STILL MISSING')}")
            print(f"    Status: {campaign_dict.get('status')}")
            print(f"    Raw data: {json.dumps(campaign_dict, indent=4)}")
        except Exception as e:
            print(f"  Error fetching campaign: {e}")

    print("\n" + "=" * 60)
    print(f"Summary: {len(campaigns_without_names)} out of {len(campaigns_list)} campaigns have missing names")
    print(f"Missing campaign IDs: {campaigns_without_names}")
else:
    print("\n" + "=" * 60)
    print("✓ All campaigns have names!")

print("\n3. Checking account permissions...")
print("-" * 60)
try:
    account_data = ad_account.api_get(
        fields=['name', 'account_status', 'capabilities', 'business']
    )
    account_dict = dict(account_data)
    print(f"Account Name: {account_dict.get('name')}")
    print(f"Account Status: {account_dict.get('account_status')}")
    print(f"Capabilities: {json.dumps(account_dict.get('capabilities', []), indent=2)}")
    print(f"Business ID: {account_dict.get('business')}")
except Exception as e:
    print(f"Error fetching account details: {e}")

print("\n" + "=" * 60)
print("Test complete!")
132 src/meta_api_grabber/test_google_ads_accounts.py Normal file
@@ -0,0 +1,132 @@
#!/usr/bin/env python3
"""
Test script to grab ad accounts from Google Ads API.

This script reads configuration from google-ads.yaml and authenticates using
a service account JSON key file to retrieve accessible customer accounts.
"""

from google.ads.googleads.client import GoogleAdsClient
from google.ads.googleads.errors import GoogleAdsException
import sys


def list_accessible_customers(client):
    """Lists all customer IDs accessible to the authenticated user.

    Args:
        client: An initialized GoogleAdsClient instance.

    Returns:
        List of customer resource names.
    """
    customer_service = client.get_service("CustomerService")

    try:
        accessible_customers = customer_service.list_accessible_customers()
        print(f"\nFound {len(accessible_customers.resource_names)} accessible customers:")

        for resource_name in accessible_customers.resource_names:
            customer_id = resource_name.split('/')[-1]
            print(f"  - Customer ID: {customer_id}")
            print(f"    Resource Name: {resource_name}")

        return accessible_customers.resource_names

    except GoogleAdsException as ex:
        print(f"Request failed with status {ex.error.code().name}")
        for error in ex.failure.errors:
            print(f"\tError: {error.message}")
            if error.location:
                for field in error.location.field_path_elements:
                    print(f"\t\tField: {field.field_name}")
        sys.exit(1)


def get_customer_details(client, customer_id):
    """Retrieves detailed information about a customer account.

    Args:
        client: An initialized GoogleAdsClient instance.
        customer_id: The customer ID (without dashes).
    """
    ga_service = client.get_service("GoogleAdsService")

    query = """
        SELECT
            customer.id,
            customer.descriptive_name,
            customer.currency_code,
            customer.time_zone,
            customer.manager,
            customer.test_account
        FROM customer
        WHERE customer.id = {customer_id}
    """.format(customer_id=customer_id)

    try:
        response = ga_service.search(customer_id=customer_id, query=query)

        for row in response:
            customer = row.customer
            print(f"\n--- Customer Details for {customer_id} ---")
            print(f"  ID: {customer.id}")
            print(f"  Name: {customer.descriptive_name}")
            print(f"  Currency: {customer.currency_code}")
            print(f"  Time Zone: {customer.time_zone}")
            print(f"  Is Manager: {customer.manager}")
            print(f"  Is Test Account: {customer.test_account}")

    except GoogleAdsException as ex:
        print(f"\nFailed to get details for customer {customer_id}")
        print(f"Status: {ex.error.code().name}")
        for error in ex.failure.errors:
            print(f"  Error: {error.message}")


def main():
    """Main function to test Google Ads API connection and list accounts."""

    print("=" * 60)
    print("Google Ads API - Account Listing Test")
    print("=" * 60)

    # Load client from YAML configuration
    # By default, this looks for google-ads.yaml in the current directory
    # or in the home directory
    try:
        print("\nLoading Google Ads client from configuration...")
        client = GoogleAdsClient.load_from_storage(path="google-ads.yaml")
        print("✓ Client loaded successfully")
    except Exception as e:
        print(f"✗ Failed to load client: {e}")
        print("\nPlease ensure:")
        print("  1. google-ads.yaml exists and is properly configured")
        print("  2. google_ads_key.json exists and contains valid credentials")
        print("  3. All required fields are filled in google-ads.yaml")
        sys.exit(1)

    # List accessible customers
    print("\n" + "=" * 60)
    print("Listing Accessible Customers")
    print("=" * 60)

    resource_names = list_accessible_customers(client)

    # Get detailed information for each customer
    if resource_names:
        print("\n" + "=" * 60)
        print("Customer Details")
        print("=" * 60)

        for resource_name in resource_names:
            customer_id = resource_name.split('/')[-1]
            get_customer_details(client, customer_id)

    print("\n" + "=" * 60)
    print("Test completed successfully!")
    print("=" * 60)


if __name__ == "__main__":
    main()
171 src/meta_api_grabber/test_page_leads.py Normal file
@@ -0,0 +1,171 @@
"""
Test script to retrieve leads from a specific Facebook page.
This uses the existing Meta API credentials to test leads retrieval.
"""

import asyncio
import os
from datetime import datetime
from dotenv import load_dotenv
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.page import Page


async def test_page_leads():
    """Test retrieving leads from a specific Facebook page."""
    load_dotenv()

    # Get credentials from environment
    access_token = os.getenv("META_ACCESS_TOKEN")
    app_secret = os.getenv("META_APP_SECRET")
    app_id = os.getenv("META_APP_ID")

    if not all([access_token, app_secret, app_id]):
        print("❌ Missing required environment variables")
        print("   Please ensure META_ACCESS_TOKEN, META_APP_SECRET, and META_APP_ID")
        print("   are set in .env")
        return 1

    print("="*60)
    print("PAGE LEADS TEST - RETRIEVING LEADS FROM A SPECIFIC PAGE")
    print("="*60)
    print()

    # Initialize Facebook Ads API
    FacebookAdsApi.init(
        app_id=app_id,
        app_secret=app_secret,
        access_token=access_token,
    )

    # Prompt for page ID
    page_id = input("Enter the Facebook Page ID: ").strip()

    if not page_id:
        print("❌ No page ID provided")
        return 1

    try:
        # Initialize the Page object
        print(f"\nConnecting to Page {page_id}...")
        page = Page(fbid=page_id)

        # First, get basic page information to verify access
        print("\nFetching page information...")
        page_fields = ['name', 'id', 'access_token']
        page_info = page.api_get(fields=page_fields)

        print(f"\n✓ Page Information:")
        print(f"  Name: {page_info.get('name', 'N/A')}")
        print(f"  ID: {page_info.get('id', 'N/A')}")
        print(f"  Has Access Token: {'Yes' if page_info.get('access_token') else 'No'}")

        # Get leadgen forms associated with this page
        print("\n" + "="*60)
        print("Fetching Lead Generation Forms...")
        print("="*60)

        leadgen_forms = page.get_lead_gen_forms(
            fields=['id', 'name', 'status', 'leads_count', 'created_time']
        )

        if not leadgen_forms or len(leadgen_forms) == 0:
            print("\n⚠️  No lead generation forms found for this page")
            print("   This could mean:")
            print("   1. The page has no lead forms")
            print("   2. The access token doesn't have permission to view lead forms")
            print("   3. The page ID is incorrect")
            return 0

        print(f"\nFound {len(leadgen_forms)} lead generation form(s):\n")

        total_leads = 0
        for idx, form in enumerate(leadgen_forms, 1):
            form_id = form.get('id')
            form_name = form.get('name', 'N/A')
            form_status = form.get('status', 'N/A')
            leads_count = form.get('leads_count', 0)
            created_time = form.get('created_time', 'N/A')

            print(f"Form {idx}:")
            print(f"  ID: {form_id}")
            print(f"  Name: {form_name}")
            print(f"  Status: {form_status}")
            print(f"  Leads Count: {leads_count}")
            print(f"  Created: {created_time}")

            # Try to fetch actual leads from this form
            try:
                print(f"\n  Fetching leads from form '{form_name}'...")

                # Get the form object to retrieve leads
                from facebook_business.adobjects.leadgenform import LeadgenForm
                form_obj = LeadgenForm(fbid=form_id)

                leads = form_obj.get_leads(
                    fields=['id', 'created_time', 'field_data']
                )

                leads_list = list(leads)
                print(f"  ✓ Retrieved {len(leads_list)} lead(s)")

                if leads_list:
                    print(f"\n  Sample leads from '{form_name}':")
                    for lead_idx, lead in enumerate(leads_list[:5], 1):  # Show first 5 leads
                        lead_id = lead.get('id')
                        lead_created = lead.get('created_time', 'N/A')
                        field_data = lead.get('field_data', [])

                        print(f"\n    Lead {lead_idx}:")
                        print(f"      ID: {lead_id}")
                        print(f"      Created: {lead_created}")
                        print(f"      Fields:")

                        for field in field_data:
                            field_name = field.get('name', 'unknown')
                            field_values = field.get('values', [])
                            print(f"        {field_name}: {', '.join(field_values)}")

                    if len(leads_list) > 5:
                        print(f"\n    ... and {len(leads_list) - 5} more lead(s)")

                total_leads += len(leads_list)

            except Exception as lead_error:
                print(f"  ❌ Error fetching leads: {lead_error}")

            print()

        print("="*60)
        print("TEST COMPLETED")
        print("="*60)
        print(f"\n✓ Total forms found: {len(leadgen_forms)}")
        print(f"✓ Total leads retrieved: {total_leads}")

        if total_leads == 0:
            print("\n⚠️  No leads were retrieved. This could mean:")
            print("   1. The forms have no leads yet")
            print("   2. Your access token needs 'leads_retrieval' permission")
            print("   3. You need to request advanced access for leads_retrieval")
            print("\nRequired permissions:")
            print("  - pages_manage_ads")
            print("  - pages_read_engagement")
            print("  - leads_retrieval")

    except Exception as e:
        print(f"\n❌ Error: {e}")
        import traceback
        traceback.print_exc()
        return 1

    return 0


def main():
    """Entry point for the test script."""
    exit_code = asyncio.run(test_page_leads())
    exit(exit_code)


if __name__ == "__main__":
    main()
108 src/meta_api_grabber/test_rate_limiter_issue.py Normal file
@@ -0,0 +1,108 @@
"""
Test to diagnose if the rate limiter is causing campaign name issues.
"""

import os
import asyncio
from dotenv import load_dotenv
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.campaign import Campaign

from rate_limiter import MetaRateLimiter

# Load environment variables
load_dotenv()

# Initialize Facebook Ads API
access_token = os.getenv("META_ACCESS_TOKEN")
app_secret = os.getenv("META_APP_SECRET")
app_id = os.getenv("META_APP_ID")

if not all([access_token, app_secret, app_id]):
    raise ValueError("Missing required environment variables")

FacebookAdsApi.init(
    app_id=app_id,
    app_secret=app_secret,
    access_token=access_token,
)

account_id = "act_238334370765317"

# Initialize rate limiter
rate_limiter = MetaRateLimiter(
    base_delay=0.1,  # Fast for testing
    throttle_threshold=75.0,
    max_retry_delay=300.0,
    max_retries=5,
)


async def _rate_limited_request(func, *args, **kwargs):
    """Execute a request with rate limiting (same as in scheduled_grabber.py)."""
    return await rate_limiter.execute_with_retry(func, *args, **kwargs)


async def test_direct_fetch():
    """Test direct fetch without rate limiter."""
    print("1. DIRECT FETCH (no rate limiter)")
    print("=" * 60)

    ad_account = AdAccount(account_id)
    campaigns = ad_account.get_campaigns(
        fields=[
            Campaign.Field.name,
            Campaign.Field.status,
            Campaign.Field.objective,
        ],
        params={'limit': 5}
    )

    print(f"Type of campaigns: {type(campaigns)}")
    print(f"Campaigns object: {campaigns}\n")

    for i, campaign in enumerate(campaigns, 1):
        campaign_dict = dict(campaign)
        print(f"Campaign {i}:")
        print(f"  ID: {campaign_dict.get('id')}")
        print(f"  Name: {campaign_dict.get('name', '❌ MISSING')}")
        print(f"  Keys in dict: {list(campaign_dict.keys())}")
        print()


async def test_rate_limited_fetch():
    """Test fetch WITH rate limiter."""
    print("\n2. RATE LIMITED FETCH (with rate limiter)")
    print("=" * 60)

    ad_account = AdAccount(account_id)
    campaigns = await _rate_limited_request(
        ad_account.get_campaigns,
        fields=[
            Campaign.Field.name,
            Campaign.Field.status,
            Campaign.Field.objective,
        ],
        params={'limit': 5}
    )

    print(f"Type of campaigns: {type(campaigns)}")
    print(f"Campaigns object: {campaigns}\n")

    for i, campaign in enumerate(campaigns, 1):
        campaign_dict = dict(campaign)
        print(f"Campaign {i}:")
        print(f"  ID: {campaign_dict.get('id')}")
        print(f"  Name: {campaign_dict.get('name', '❌ MISSING')}")
        print(f"  Keys in dict: {list(campaign_dict.keys())}")
        print()


async def main():
    await test_direct_fetch()
    await test_rate_limited_fetch()


if __name__ == "__main__":
    asyncio.run(main())
118 src/meta_api_grabber/view_manager.py Normal file
@@ -0,0 +1,118 @@
"""
View manager for TimescaleDB materialized views.
Handles creation, updates, and refresh of materialized views for flattened insights data.
Views are loaded from individual SQL files in the views directory.
"""

import logging
import pathlib
from typing import List, Optional

import asyncpg

logger = logging.getLogger(__name__)


class ViewManager:
    """Manages materialized views for insights data flattening."""

    def __init__(self, pool: asyncpg.Pool):
        """
        Initialize view manager with a database connection pool.

        Args:
            pool: asyncpg connection pool
        """
        self.pool = pool
        self.views_dir = pathlib.Path(__file__).parent / "views"

    async def initialize_views(self) -> None:
        """
        Initialize all materialized views at startup.
        Loads and executes SQL files from the views directory in alphabetical order.
        Creates views if they don't exist; the operation is idempotent.
        """
        logger.info("Initializing materialized views...")

        if not self.views_dir.exists():
            logger.warning(f"Views directory not found at {self.views_dir}")
            return

        # Get all .sql files in alphabetical order for consistent execution
        view_files = sorted(self.views_dir.glob("*.sql"))
        if not view_files:
            logger.warning(f"No SQL files found in {self.views_dir}")
            return

        async with self.pool.acquire() as conn:
            for view_file in view_files:
                logger.debug(f"Loading view file: {view_file.name}")
                await self._execute_view_file(conn, view_file)

        logger.info("✓ Materialized views initialized successfully")

    async def _execute_view_file(self, conn: asyncpg.Connection, view_file: pathlib.Path) -> None:
        """
        Execute SQL statements from a view file.

        Args:
            conn: asyncpg connection
            view_file: Path to SQL file
        """
        with open(view_file, 'r') as f:
            view_sql = f.read()

        statements = [s.strip() for s in view_sql.split(';') if s.strip()]

        for i, stmt in enumerate(statements, 1):
            if not stmt:
                continue

            try:
                await conn.execute(stmt)
                logger.debug(f"{view_file.name}: Executed statement {i}")
            except Exception as e:
                error_msg = str(e).lower()
                if "does not exist" in error_msg:
                    # Could be a missing dependent view or table, log it
                    logger.debug(f"{view_file.name}: View or table does not exist (statement {i})")
                else:
                    # Log other errors but don't fail - could be incompatible schema changes
                    logger.warning(f"{view_file.name}: Error in statement {i}: {e}")

    async def refresh_views(self, view_names: Optional[List[str]] = None) -> None:
        """
        Refresh specified materialized views.

        Args:
            view_names: List of view names to refresh. If None, refreshes all views.
        """
        if view_names is None:
            view_names = [
                "adset_insights_flattened",
                "account_insights_flattened",
                "campaign_insights_flattened",
                "campaign_insights_by_country_flattened",
                #"campaign_insights_by_device_flattened",
                #"campaign_insights_by_gender_flattened",
            ]

        async with self.pool.acquire() as conn:
            for view_name in view_names:
                try:
                    # Use CONCURRENTLY to avoid locking
                    await conn.execute(
                        f"REFRESH MATERIALIZED VIEW CONCURRENTLY {view_name};"
                    )
                    logger.debug(f"Refreshed materialized view: {view_name}")
                except Exception as e:
                    error_msg = str(e).lower()
                    # View might not exist if not initialized, that's okay
                    if "does not exist" in error_msg:
                        logger.debug(f"View does not exist, skipping refresh: {view_name}")
                    else:
                        logger.warning(f"Error refreshing view {view_name}: {e}")

    async def refresh_all_views(self) -> None:
        """Refresh all materialized views."""
        await self.refresh_views()
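For orientation, a minimal sketch of wiring `ViewManager` into a startup routine; the DSN and the `main()` wrapper are assumptions for illustration, only the `ViewManager` calls come from the file above:

```python
import asyncio

import asyncpg

from src.meta_api_grabber.view_manager import ViewManager


async def main() -> None:
    # Placeholder DSN; the real connection settings live in the project's config.
    pool = await asyncpg.create_pool("postgresql://meta_user:secret@localhost:5432/meta")
    try:
        manager = ViewManager(pool)
        await manager.initialize_views()   # idempotent: executes views/*.sql in order
        await manager.refresh_all_views()  # REFRESH MATERIALIZED VIEW CONCURRENTLY ...
    finally:
        await pool.close()


if __name__ == "__main__":
    asyncio.run(main())
```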
25 src/meta_api_grabber/views/SETUP_PERMISSIONS.md Normal file
@@ -0,0 +1,25 @@
# View Permissions Setup

The scheduled grabber needs the `meta_user` to have permissions to create/drop/modify views.

## One-time Setup (run as superuser or database owner)

```sql
-- Give meta_user the ability to create views in the public schema
GRANT CREATE ON SCHEMA public TO meta_user;

-- Alternative: Make meta_user the owner of all views (if they already exist)
-- ALTER MATERIALIZED VIEW account_insights_flattened OWNER TO meta_user;
-- ALTER MATERIALIZED VIEW campaign_insights_flattened OWNER TO meta_user;
-- ALTER MATERIALIZED VIEW adset_insights_flattened OWNER TO meta_user;
```

Run these commands once as a superuser/database owner; after that, the scheduled grabber can manage views normally.

## Why This Is Needed

PostgreSQL materialized views must be owned by the user who created them. Since the scheduled grabber recreates views on startup (to apply schema changes), it needs permission to:
- `DROP MATERIALIZED VIEW` - remove old views
- `CREATE MATERIALIZED VIEW` - create new views

Without proper schema permissions, the `meta_user` cannot perform these operations.
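If the one-time grant is easier to script than to run by hand, a throwaway asyncpg sketch would look like the following; the superuser DSN is a placeholder, and the statement is the same grant shown above:

```python
import asyncio

import asyncpg


async def grant_view_permissions() -> None:
    # Connect as the superuser/database owner (placeholder DSN).
    conn = await asyncpg.connect("postgresql://postgres:secret@localhost:5432/meta")
    try:
        # The one-time grant from the SQL block above.
        await conn.execute("GRANT CREATE ON SCHEMA public TO meta_user;")
    finally:
        await conn.close()


if __name__ == "__main__":
    asyncio.run(grant_view_permissions())
```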
44 src/meta_api_grabber/views/account_insights.sql Normal file
@@ -0,0 +1,44 @@
DROP MATERIALIZED VIEW IF EXISTS account_insights_flattened CASCADE;

CREATE MATERIALIZED VIEW account_insights_flattened AS
SELECT
    time,
    account_id,
    impressions,
    clicks,
    spend,
    reach,
    frequency,
    ctr,
    cpc,
    cpm,
    cpp,
    date_preset,
    date_start,
    date_stop,
    fetched_at,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'link_click') AS link_click,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'landing_page_view') AS landing_page_view,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'lead') AS lead,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(cost_per_action_type)
      WHERE value->>'action_type' = 'link_click') AS cost_per_link_click,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(cost_per_action_type)
      WHERE value->>'action_type' = 'landing_page_view') AS cost_per_landing_page_view,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(cost_per_action_type)
      WHERE value->>'action_type' = 'lead') AS cost_per_lead
FROM account_insights;

CREATE INDEX idx_account_insights_flat_date ON account_insights_flattened(date_start, date_stop);

CREATE UNIQUE INDEX idx_account_insights_flat_unique ON account_insights_flattened(time, account_id);

REFRESH MATERIALIZED VIEW CONCURRENTLY account_insights_flattened;
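The scalar subqueries above flatten Meta's `actions` and `cost_per_action_type` jsonb arrays — lists of `{action_type, value}` objects — into one column per action type. A quick Python equivalent of the same lookup, for intuition (the sample payload is made up):

```python
from typing import Optional


def action_value(actions: list[dict], action_type: str) -> Optional[float]:
    """Mirror of the SQL scalar subquery: value for a given action_type, else None."""
    for entry in actions:
        if entry.get("action_type") == action_type:
            return float(entry["value"])
    return None


# Made-up sample shaped like the Meta Ads API `actions` field.
sample = [{"action_type": "link_click", "value": "42"},
          {"action_type": "lead", "value": "3"}]
assert action_value(sample, "link_click") == 42.0
assert action_value(sample, "landing_page_view") is None
```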
20 src/meta_api_grabber/views/account_insights_by_device.sql Normal file
@@ -0,0 +1,20 @@
--- account insights by device

DROP VIEW IF EXISTS account_insights_by_device CASCADE;

CREATE VIEW account_insights_by_device AS
SELECT
    time,
    account_id,
    device_platform,
    SUM(impressions) AS impressions,
    SUM(clicks) AS clicks,
    SUM(spend) AS spend,
    SUM(link_click) AS link_click,
    SUM(landing_page_view) AS landing_page_view,
    SUM(lead) AS lead
FROM campaign_insights_by_device_flattened
GROUP BY time, account_id, device_platform;
53 src/meta_api_grabber/views/account_insights_by_gender.sql Normal file
@@ -0,0 +1,53 @@
DROP VIEW IF EXISTS account_insights_by_gender CASCADE;

CREATE VIEW account_insights_by_gender AS
SELECT
    time,
    account_id,
    gender,
    SUM(impressions) AS impressions,
    SUM(clicks) AS clicks,
    SUM(spend) AS spend,
    SUM(link_click) AS link_click,
    SUM(landing_page_view) AS landing_page_view,
    SUM(lead) AS lead
FROM campaign_insights_by_gender
GROUP BY time, account_id, gender;

DROP VIEW IF EXISTS account_insights_by_age CASCADE;

CREATE VIEW account_insights_by_age AS
SELECT
    time,
    account_id,
    age,
    SUM(impressions) AS impressions,
    SUM(clicks) AS clicks,
    SUM(spend) AS spend,
    SUM(link_click) AS link_click,
    SUM(landing_page_view) AS landing_page_view,
    SUM(lead) AS lead
FROM campaign_insights_by_age
GROUP BY time, account_id, age;

DROP VIEW IF EXISTS account_insights_by_gender_and_age CASCADE;

CREATE VIEW account_insights_by_gender_and_age AS
SELECT
    time,
    account_id,
    gender,
    age,
    SUM(impressions) AS impressions,
    SUM(clicks) AS clicks,
    SUM(spend) AS spend,
    SUM(link_click) AS link_click,
    SUM(landing_page_view) AS landing_page_view,
    SUM(lead) AS lead
FROM campaign_insights_by_gender_and_age
GROUP BY time, account_id, age, gender;
55 src/meta_api_grabber/views/account_insights_google.sql Normal file
@@ -0,0 +1,55 @@
DROP VIEW IF EXISTS g_account_insights CASCADE;
CREATE VIEW g_account_insights AS
SELECT
    time,
    account_id,
    clicks,
    impressions,
    interactions,
    cost_micros,
    cost_micros / 1000000.0 AS cost,
    leads,
    engagements,
    customer_currency_code,
    account_name,

    -- CTR (Click-Through Rate)
    (clicks::numeric / impressions_nz) * 100 AS ctr,

    -- CPM (Cost Per Mille) in micros and standard units
    (cost_micros::numeric / impressions_nz) * 1000 AS cpm_micros,
    (cost_micros::numeric / impressions_nz) * 1000 / 1000000.0 AS cpm,

    -- CPC (Cost Per Click) in micros and standard units
    cost_micros::numeric / clicks_nz AS cpc_micros,
    cost_micros::numeric / clicks_nz / 1000000.0 AS cpc,

    -- CPL (Cost Per Lead) in micros and standard units
    cost_micros::numeric / leads_nz AS cpl_micros,
    cost_micros::numeric / leads_nz / 1000000.0 AS cpl,

    -- Conversion Rate
    (leads::numeric / clicks_nz) * 100 AS conversion_rate,

    -- Engagement Rate
    (engagements::numeric / impressions_nz) * 100 AS engagement_rate

FROM (
    SELECT
        segments_date AS time,
        customer_id AS account_id,
        sum(metrics_clicks) AS clicks,
        sum(metrics_impressions) AS impressions,
        sum(metrics_interactions) AS interactions,
        sum(metrics_cost_micros) AS cost_micros,
        sum(metrics_conversions) AS leads,
        sum(metrics_engagements) AS engagements,
        customer_currency_code,
        customer_descriptive_name AS account_name,
        -- Null-safe denominators
        NULLIF(sum(metrics_clicks), 0) AS clicks_nz,
        NULLIF(sum(metrics_impressions), 0) AS impressions_nz,
        NULLIF(sum(metrics_conversions), 0) AS leads_nz
    FROM google.account_performance_report
    GROUP BY account_id, time, customer_currency_code, account_name
) base;
38 src/meta_api_grabber/views/adset_insights.sql Normal file
@@ -0,0 +1,38 @@
DROP MATERIALIZED VIEW IF EXISTS adset_insights_flattened CASCADE;

CREATE MATERIALIZED VIEW adset_insights_flattened AS
SELECT
    time,
    adset_id,
    campaign_id,
    account_id,
    impressions,
    clicks,
    spend,
    reach,
    ctr,
    cpc,
    cpm,
    date_preset,
    date_start,
    date_stop,
    fetched_at,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'link_click') AS link_click,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'landing_page_view') AS landing_page_view,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'lead') AS lead
FROM adset_insights;

-- Add indexes for common query patterns
CREATE INDEX idx_adset_insights_flat_campaign ON adset_insights_flattened(campaign_id);
CREATE INDEX idx_adset_insights_flat_date ON adset_insights_flattened(date_start, date_stop);

CREATE UNIQUE INDEX idx_adset_insights_flat_unique ON adset_insights_flattened(time, adset_id);
REFRESH MATERIALIZED VIEW CONCURRENTLY adset_insights_flattened;
33 src/meta_api_grabber/views/campaign_insights.sql Normal file
@@ -0,0 +1,33 @@
--- campaign insights

DROP MATERIALIZED VIEW IF EXISTS campaign_insights_flattened CASCADE;

CREATE MATERIALIZED VIEW campaign_insights_flattened AS
SELECT date_start AS "time",
    concat('act_', account_id) AS account_id,
    campaign_id,
    impressions,
    clicks,
    spend,
    reach,
    ctr,
    cpc,
    cpm,
    date_start,
    date_stop,
    (SELECT (jsonb_array_elements.value ->> 'value'::text)::numeric AS "numeric"
       FROM jsonb_array_elements(customcampaign_insights.actions) jsonb_array_elements(value)
      WHERE (jsonb_array_elements.value ->> 'action_type'::text) = 'link_click'::text) AS link_click,
    (SELECT (jsonb_array_elements.value ->> 'value'::text)::numeric AS "numeric"
       FROM jsonb_array_elements(customcampaign_insights.actions) jsonb_array_elements(value)
      WHERE (jsonb_array_elements.value ->> 'action_type'::text) = 'landing_page_view'::text) AS landing_page_view,
    (SELECT (jsonb_array_elements.value ->> 'value'::text)::numeric AS "numeric"
       FROM jsonb_array_elements(customcampaign_insights.actions) jsonb_array_elements(value)
      WHERE (jsonb_array_elements.value ->> 'action_type'::text) = 'lead'::text) AS lead
FROM meta.customcampaign_insights;

CREATE INDEX idx_campaign_insights_flat_date ON campaign_insights_flattened(date_start, date_stop);

CREATE UNIQUE INDEX idx_campaign_insights_flat_unique ON campaign_insights_flattened(time, campaign_id);

REFRESH MATERIALIZED VIEW CONCURRENTLY campaign_insights_flattened;
36 src/meta_api_grabber/views/campaign_insights_by_country.sql Normal file
@@ -0,0 +1,36 @@
--- campaign insights by country

DROP MATERIALIZED VIEW IF EXISTS campaign_insights_by_country_flattened CASCADE;

CREATE MATERIALIZED VIEW campaign_insights_by_country_flattened AS
SELECT date_start AS "time",
    concat('act_', account_id) AS account_id,
    campaign_id,
    country,
    impressions,
    clicks,
    spend,
    reach,
    frequency,
    ctr,
    cpc,
    cpm,
    date_start,
    date_stop,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'link_click') AS link_click,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'landing_page_view') AS landing_page_view,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'lead') AS lead
FROM meta.custom_campaign_country;

CREATE INDEX idx_campaign_insights_by_country_flat_date ON campaign_insights_by_country_flattened(date_start, date_stop);

CREATE UNIQUE INDEX idx_campaign_insights_by_country_flat_unique ON campaign_insights_by_country_flattened(time, campaign_id, country);

REFRESH MATERIALIZED VIEW CONCURRENTLY campaign_insights_by_country_flattened;
32 src/meta_api_grabber/views/campaign_insights_by_device.sql Normal file
@@ -0,0 +1,32 @@
--- campaign insights by device

DROP MATERIALIZED VIEW IF EXISTS campaign_insights_by_device_flattened CASCADE;

CREATE MATERIALIZED VIEW campaign_insights_by_device_flattened AS
SELECT date_start AS "time",
    concat('act_', account_id) AS account_id,
    campaign_id,
    device_platform,
    impressions,
    clicks,
    spend,
    reach,
    frequency,
    date_start,
    date_stop,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'link_click') AS link_click,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'landing_page_view') AS landing_page_view,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'lead') AS lead
FROM meta.custom_campaign_device;

CREATE INDEX idx_campaign_insights_by_device_flat_date ON campaign_insights_by_device_flattened(date_start, date_stop);

CREATE UNIQUE INDEX idx_campaign_insights_by_device_flat_unique ON campaign_insights_by_device_flattened(time, campaign_id, device_platform);
REFRESH MATERIALIZED VIEW CONCURRENTLY campaign_insights_by_device_flattened;
71 src/meta_api_grabber/views/campaign_insights_by_gender.sql Normal file
@@ -0,0 +1,71 @@
--- campaign insights by gender and age

DROP MATERIALIZED VIEW IF EXISTS campaign_insights_by_gender_and_age CASCADE;

CREATE MATERIALIZED VIEW campaign_insights_by_gender_and_age AS
SELECT date_start AS "time",
    concat('act_', account_id) AS account_id,
    campaign_id,
    gender,
    age,
    impressions,
    clicks,
    spend,
    reach,
    frequency,
    ctr,
    cpc,
    cpm,
    date_start,
    date_stop,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'link_click') AS link_click,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'landing_page_view') AS landing_page_view,
    (SELECT (value->>'value')::numeric
       FROM jsonb_array_elements(actions)
      WHERE value->>'action_type' = 'lead') AS lead
FROM meta.custom_campaign_gender;

CREATE INDEX idx_campaign_insights_by_gender_and_age_date ON campaign_insights_by_gender_and_age(date_start, date_stop);

CREATE UNIQUE INDEX idx_campaign_insights_by_gender_and_age_unique ON campaign_insights_by_gender_and_age(time, campaign_id, gender, age);
REFRESH MATERIALIZED VIEW CONCURRENTLY campaign_insights_by_gender_and_age;


DROP VIEW IF EXISTS campaign_insights_by_gender CASCADE;

CREATE VIEW campaign_insights_by_gender AS
SELECT time,
    SUM(clicks) AS clicks,
    SUM(link_click) AS link_click,
    SUM(lead) AS lead,
    SUM(landing_page_view) AS landing_page_view,
    SUM(spend) AS spend,
    SUM(reach) AS reach,
    SUM(impressions) AS impressions,
    gender,
    campaign_id,
    account_id
FROM campaign_insights_by_gender_and_age
GROUP BY time, gender, account_id, campaign_id, date_start, date_stop;

DROP VIEW IF EXISTS campaign_insights_by_age CASCADE;

CREATE VIEW campaign_insights_by_age AS
SELECT time,
    SUM(clicks) AS clicks,
    SUM(link_click) AS link_click,
    SUM(lead) AS lead,
    SUM(landing_page_view) AS landing_page_view,
    SUM(spend) AS spend,
    SUM(reach) AS reach,
    SUM(impressions) AS impressions,
    age,
    campaign_id,
    account_id
FROM campaign_insights_by_gender_and_age
GROUP BY time, age, account_id, campaign_id, date_start, date_stop;
11 src/meta_api_grabber/views/grafana_permissions.sql Normal file
@@ -0,0 +1,11 @@
-- Grant SELECT permissions for Grafana user on flattened views
GRANT SELECT ON account_insights_flattened TO grafana;
GRANT SELECT ON campaign_insights_flattened TO grafana;
GRANT SELECT ON adset_insights_flattened TO grafana;

-- Grant SELECT on all existing tables and views in the schema
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana;

-- Grant SELECT on all future tables and views in the schema
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO grafana;
175 test_rate_limiter.py Normal file
@@ -0,0 +1,175 @@
#!/usr/bin/env python3
"""
Test script for the enhanced Meta API rate limiter.

This demonstrates the rate limiter's ability to parse and monitor
all Meta API rate limit headers.
"""

import asyncio
import json
import logging
from dataclasses import dataclass
from typing import Dict, Optional

from src.meta_api_grabber.rate_limiter import MetaRateLimiter


# Configure logging to show debug messages
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)


@dataclass
class MockResponse:
    """Mock API response with rate limit headers."""
    headers: Dict[str, str]


async def test_rate_limiter():
    """Test the rate limiter with various header scenarios."""

    # Initialize rate limiter
    limiter = MetaRateLimiter(
        base_delay=1.0,
        throttle_threshold=75.0,
        max_retry_delay=60.0,
    )

    print("\n" + "="*70)
    print("TESTING ENHANCED META API RATE LIMITER")
    print("="*70 + "\n")

    # Test 1: X-App-Usage header
    print("\n--- Test 1: X-App-Usage Header ---")
    response1 = MockResponse(headers={
        'x-app-usage': json.dumps({
            'call_count': 45,
            'total_time': 30,
            'total_cputime': 35
        })
    })
    limiter.update_usage(response1)
    limiter.print_stats()

    # Test 2: X-Ad-Account-Usage header (first account)
    print("\n--- Test 2: X-Ad-Account-Usage Header (Account 1) ---")
    response2 = MockResponse(headers={
        'x-ad-account-usage': json.dumps({
            'acc_id_util_pct': 78.5,
            'reset_time_duration': 120,
            'ads_api_access_tier': 'development_access'
        })
    })
    limiter.update_usage(response2, account_id='act_123456789')
    limiter.print_stats()

    # Test 2b: X-Ad-Account-Usage header (second account)
    print("\n--- Test 2b: X-Ad-Account-Usage Header (Account 2) ---")
    response2b = MockResponse(headers={
        'x-ad-account-usage': json.dumps({
            'acc_id_util_pct': 45.2,
            'reset_time_duration': 80,
            'ads_api_access_tier': 'standard_access'
        })
    })
    limiter.update_usage(response2b, account_id='act_987654321')
    limiter.print_stats()

    # Test 3: X-Business-Use-Case-Usage header
    print("\n--- Test 3: X-Business-Use-Case-Usage Header ---")
    response3 = MockResponse(headers={
        'x-business-use-case-usage': json.dumps({
            '66782684': [{
                'type': 'ads_management',
                'call_count': 85,
                'total_cputime': 40,
                'total_time': 35,
                'estimated_time_to_regain_access': 5,
                'ads_api_access_tier': 'development_access'
            }],
            '10153848260347724': [{
                'type': 'ads_insights',
                'call_count': 92,
                'total_cputime': 50,
                'total_time': 45,
                'estimated_time_to_regain_access': 8,
                'ads_api_access_tier': 'development_access'
            }]
        })
    })
    limiter.update_usage(response3)
    limiter.print_stats()

    # Test 4: Legacy x-fb-ads-insights-throttle header
    print("\n--- Test 4: Legacy Header ---")
    response4 = MockResponse(headers={
        'x-fb-ads-insights-throttle': json.dumps({
            'app_id_util_pct': 65.0,
            'acc_id_util_pct': 70.5
        })
    })
    limiter.update_usage(response4)
    limiter.print_stats()

    # Test 5: All headers combined (high usage scenario)
    print("\n--- Test 5: High Usage Scenario (All Headers) ---")
    response5 = MockResponse(headers={
        'x-app-usage': json.dumps({
            'call_count': 95,
            'total_time': 88,
            'total_cputime': 90
        }),
        'x-ad-account-usage': json.dumps({
            'acc_id_util_pct': 97.5,
            'reset_time_duration': 300,
            'ads_api_access_tier': 'standard_access'
        }),
        'x-business-use-case-usage': json.dumps({
            '12345678': [{
                'type': 'ads_insights',
                'call_count': 98,
                'total_cputime': 95,
                'total_time': 92,
                'estimated_time_to_regain_access': 15,
                'ads_api_access_tier': 'standard_access'
            }]
        }),
        'x-fb-ads-insights-throttle': json.dumps({
            'app_id_util_pct': 93.0,
            'acc_id_util_pct': 96.0
        })
    })
    limiter.update_usage(response5)
    limiter.print_stats()

    # Test throttling behavior
    print("\n--- Test 6: Throttling Behavior ---")
    print(f"Should throttle: {limiter.should_throttle()}")
    print(f"Max usage: {limiter.get_max_usage_pct():.1f}%")
    print(f"Throttle delay: {limiter.get_throttle_delay():.1f}s")
    print(f"Estimated time to regain access: {limiter.estimated_time_to_regain_access} min")

    # Show per-account reset times
    if limiter.ad_account_usage:
        print("Per-account reset times:")
        for account_id, usage in limiter.ad_account_usage.items():
            reset_time = usage.get('reset_time_duration', 0)
            if reset_time > 0:
                print(f"  {account_id}: {reset_time}s")

    # Test 7: Empty/missing headers
    print("\n--- Test 7: Missing Headers ---")
    response6 = MockResponse(headers={})
    limiter.update_usage(response6)
    limiter.print_stats()

    print("\n" + "="*70)
    print("ALL TESTS COMPLETED")
    print("="*70 + "\n")


if __name__ == '__main__':
    asyncio.run(test_rate_limiter())
115 tests/README.md Normal file
@@ -0,0 +1,115 @@
# Tests

This directory contains tests for the meta_api_grabber project.

## Running Tests

Install test dependencies:
```bash
uv sync --extra test
```

Run all tests:
```bash
uv run pytest
```

Run a specific test file:
```bash
uv run pytest tests/test_field_schema_validation.py -v
```

Run with verbose output:
```bash
uv run pytest tests/test_field_schema_validation.py -v -s
```

## Test Files

### `test_field_schema_validation.py` (Integration Test)

This is a critical integration test that validates that all fields requested by the grab_* methods in `scheduled_grabber.py` exist in the actual database schema.

**What it does:**
1. Parses `db_schema.sql` to extract actual table columns
2. Checks fields requested by each grab method:
   - `grab_account_insights()` → `account_insights` table
   - `grab_campaign_insights()` → `campaign_insights` table
   - `grab_adset_insights()` → `adset_insights` table
   - `grab_campaign_insights_by_country()` → `campaign_insights_by_country` table
3. Verifies all requested fields exist in the corresponding database table

**Why this test is important:** When new fields are added to the Meta API field lists, this test quickly alerts you if the corresponding database columns need to be added. Since fields are only added (never removed), the test helps catch schema mismatches early.

**Test methods:**
- `test_grab_account_insights_fields()` - Validates account-level insight fields
- `test_grab_campaign_insights_fields()` - Validates campaign-level insight fields
- `test_grab_adset_insights_fields()` - Validates ad set-level insight fields
- `test_grab_campaign_insights_by_country_fields()` - Validates country breakdown fields
- `test_all_tables_exist()` - Ensures all required insight tables exist
- `test_schema_documentation()` - Prints out the parsed schema for reference

**Output example:**
```
Table: account_insights
Columns (17): account_id, actions, clicks, cost_per_action_type, cpc, cpm, cpp, ctr, ...

Table: campaign_insights
Columns (15): account_id, actions, campaign_id, clicks, cpc, cpm, ctr, ...
```

## Writing Tests

Use markers to categorize tests:
```python
@pytest.mark.unit
def test_something():
    pass

@pytest.mark.integration
async def test_database_connection():
    pass
```

Run only unit tests:
```bash
uv run pytest -m unit
```

Run everything except integration tests:
```bash
uv run pytest -m "not integration"
```

## Schema Validation Workflow

When you add new fields to a grab method:

1. **Add fields to `scheduled_grabber.py`:**
   ```python
   fields = [
       ...
       AdsInsights.Field.new_field,  # New field added
   ]
   ```

2. **Run tests to see what's missing:**
   ```bash
   uv run pytest tests/test_field_schema_validation.py -v -s
   ```

3. **Test output will show:**
   ```
   adset_insights table missing columns: {'new_field'}
   Available: [account_id, actions, adset_id, ...]
   ```

4. **Update `db_schema.sql` with the new column:**
   ```sql
   ALTER TABLE adset_insights ADD COLUMN IF NOT EXISTS new_field TYPE;
   ```

5. **Run tests again to verify:**
   ```bash
   uv run pytest tests/test_field_schema_validation.py -v
   ```
1 tests/__init__.py Normal file
@@ -0,0 +1 @@
"""Tests for meta_api_grabber package."""
13 tests/conftest.py Normal file
@@ -0,0 +1,13 @@
"""Pytest configuration and fixtures."""

import pytest


def pytest_configure(config):
    """Configure pytest."""
    config.addinivalue_line(
        "markers", "integration: marks tests as integration tests (deselect with '-m \"not integration\"')"
    )
    config.addinivalue_line(
        "markers", "unit: marks tests as unit tests"
    )
360 tests/test_field_schema_validation.py Normal file
@@ -0,0 +1,360 @@
"""
Integration test that validates all fields requested by grab_* methods exist in the database schema.

This test:
1. Parses the SQL schema file (db_schema.sql) to extract actual table columns
2. Reads scheduled_grabber.py to find which methods call which tables
3. Verifies that all requested fields exist in the actual database schema
"""

import re
import pathlib
from typing import Dict, Set, List

import pytest


def parse_sql_schema() -> Dict[str, Set[str]]:
    """
    Parse db_schema.sql to extract table columns.

    Returns:
        Dictionary mapping table names to sets of column names
    """
    schema_file = pathlib.Path(__file__).parent.parent / "src" / "meta_api_grabber" / "db_schema.sql"

    if not schema_file.exists():
        raise FileNotFoundError(f"Schema file not found: {schema_file}")

    with open(schema_file, 'r') as f:
        content = f.read()

    tables = {}

    # Parse CREATE TABLE statements
    # Pattern: CREATE TABLE IF NOT EXISTS table_name (...)
    create_table_pattern = r'CREATE TABLE IF NOT EXISTS (\w+)\s*\((.*?)\);'

    for match in re.finditer(create_table_pattern, content, re.DOTALL):
        table_name = match.group(1)
        table_body = match.group(2)

        # Extract column names (first word before space/comma)
        # Pattern: column_name TYPE ...
        column_pattern = r'^\s*(\w+)\s+\w+'
        columns = set()

        for line in table_body.split('\n'):
            line = line.strip()
            if not line or line.startswith('--') or line.startswith('PRIMARY') or line.startswith('FOREIGN') or line.startswith('CONSTRAINT'):
                continue

            col_match = re.match(column_pattern, line)
            if col_match:
                columns.add(col_match.group(1))

        if columns:
            tables[table_name] = columns

    return tables


def get_field_name(field_str: str) -> str:
    """
    Extract field name from AdsInsights.Field.xxx notation.

    Example: 'impressions' from 'AdsInsights.Field.impressions'
    """
    if '.' in field_str:
        return field_str.split('.')[-1]
    return field_str


def extract_fields_from_grabber_source() -> Dict[str, List[str]]:
    """
    Extract field lists from grab_* methods by reading scheduled_grabber.py source.

    Returns:
        Dictionary mapping method names to lists of field names
    """
    grabber_file = pathlib.Path(__file__).parent.parent / "src" / "meta_api_grabber" / "scheduled_grabber.py"

    if not grabber_file.exists():
        raise FileNotFoundError(f"scheduled_grabber.py not found: {grabber_file}")

    with open(grabber_file, 'r') as f:
        source = f.read()

    methods_to_table = {
        'grab_account_insights': 'account_insights',
        'grab_campaign_insights': 'campaign_insights',
        'grab_adset_insights': 'adset_insights',
        'grab_campaign_insights_by_country': 'campaign_insights_by_country',
    }

    result = {}

    for method_name in methods_to_table.keys():
        # Find the method definition by looking for: async def method_name(...)
        method_pattern = rf'async def {method_name}\s*\('
        method_match = re.search(method_pattern, source)

        if not method_match:
            continue

        # Get the position after the method name pattern
        start_pos = method_match.end()

        # Now find where the method body actually starts (after the closing paren and docstring)
        # Skip to the opening paren
        open_paren_pos = start_pos - 1

        # Count parentheses to find the closing paren of the function signature
        paren_count = 1
        pos = open_paren_pos + 1
        while pos < len(source) and paren_count > 0:
            if source[pos] == '(':
                paren_count += 1
            elif source[pos] == ')':
                paren_count -= 1
            pos += 1

        # Now pos is after the closing paren. Find the colon
        colon_pos = source.find(':', pos)

        # Skip past any docstring if present
        after_colon = source[colon_pos + 1:colon_pos + 10].lstrip()
        if after_colon.startswith('"""') or after_colon.startswith("'''"):
            quote_type = '"""' if after_colon.startswith('"""') else "'''"
            docstring_start = source.find(quote_type, colon_pos)
            docstring_end = source.find(quote_type, docstring_start + 3) + 3
            method_body_start = docstring_end
        else:
            method_body_start = colon_pos + 1

        # Find the next method definition to know where this method ends
        next_method_pattern = r'async def \w+\s*\('
        next_match = re.search(next_method_pattern, source[method_body_start:])

        if next_match:
            method_body_end = method_body_start + next_match.start()
        else:
            # Last method - use rest of file
            method_body_end = len(source)

        method_body = source[method_body_start:method_body_end]

        # Extract fields from the method body
        # Look for: fields = [...] or fields = common_fields + [...]

        # First check if this method uses common_fields
        uses_common_fields = 'common_fields' in method_body[:500]

        if uses_common_fields:
            # Pattern: fields = common_fields + [...]
            fields_pattern = r'fields\s*=\s*common_fields\s*\+\s*\[(.*?)\]'
            fields_match = re.search(fields_pattern, method_body, re.DOTALL)
            if fields_match:
                fields_str = fields_match.group(1)
                # Extract individual field names
                field_pattern = r'AdsInsights\.Field\.(\w+)'
                fields = re.findall(field_pattern, fields_str)

                # Also get common_fields from the module level
                common_pattern = r'common_fields\s*=\s*\[(.*?)\]'
                common_match = re.search(common_pattern, source, re.DOTALL)
                if common_match:
                    common_str = common_match.group(1)
                    common_fields_list = re.findall(field_pattern, common_str)
                    fields = common_fields_list + fields

                result[method_name] = fields
        else:
            # Pattern: fields = [...]
            # Use bracket matching to find the correct field list
            fields_keyword_pos = method_body.find('fields =')

            if fields_keyword_pos != -1:
                # Find the opening bracket after fields =
                bracket_pos = method_body.find('[', fields_keyword_pos)
                if bracket_pos != -1:
                    # Count brackets to find the matching closing bracket
                    bracket_count = 0
                    end_pos = bracket_pos
                    for i, char in enumerate(method_body[bracket_pos:]):
                        if char == '[':
                            bracket_count += 1
                        elif char == ']':
                            bracket_count -= 1
                            if bracket_count == 0:
                                end_pos = bracket_pos + i
                                break

                    fields_str = method_body[bracket_pos + 1:end_pos]
                    field_pattern = r'AdsInsights\.Field\.(\w+)'
                    fields = re.findall(field_pattern, fields_str)
                    result[method_name] = fields

    return result


@pytest.fixture(scope="module")
def schema_columns():
    """Parse and cache the schema columns."""
    return parse_sql_schema()


@pytest.fixture(scope="module")
def extracted_fields_by_method():
    """Extract and cache the fields from each grab_* method."""
    return extract_fields_from_grabber_source()


# Mapping of method names to their insight table names
METHOD_TO_TABLE = {
    'grab_account_insights': 'account_insights',
    'grab_campaign_insights': 'campaign_insights',
    'grab_adset_insights': 'adset_insights',
    'grab_campaign_insights_by_country': 'campaign_insights_by_country',
}

# Fields that are IDs/names stored in metadata tables, not in the insights table
METADATA_ONLY_FIELDS = {
    'campaign_id', 'campaign_name',
    'adset_id', 'adset_name',
}


class TestFieldSchemaValidation:
    """Validate that all API field requests have corresponding database columns."""

    def test_grab_account_insights_fields(self, schema_columns, extracted_fields_by_method):
        """Test that grab_account_insights fields exist in schema."""
        method_name = 'grab_account_insights'
        table_name = METHOD_TO_TABLE[method_name]

        assert method_name in extracted_fields_by_method, f"Could not extract fields from {method_name}"

        extracted_fields = set(extracted_fields_by_method[method_name])
        table_cols = schema_columns.get(table_name, set())
        assert table_cols, f"Table {table_name} not found in schema"

        missing = extracted_fields - table_cols
        assert not missing, \
            f"{table_name} table missing columns: {missing}\n" \
            f"Method requests: {sorted(extracted_fields)}\n" \
            f"Available: {sorted(table_cols)}"

        print(f"✓ {method_name} → {table_name}: {len(extracted_fields)} fields validated")

    def test_grab_campaign_insights_fields(self, schema_columns, extracted_fields_by_method):
        """Test that grab_campaign_insights fields exist in schema."""
        method_name = 'grab_campaign_insights'
        table_name = METHOD_TO_TABLE[method_name]

        assert method_name in extracted_fields_by_method, f"Could not extract fields from {method_name}"

        extracted_fields = set(extracted_fields_by_method[method_name])
        table_cols = schema_columns.get(table_name, set())
        assert table_cols, f"Table {table_name} not found in schema"

        # Remove ID/name fields (stored in metadata tables, not insights table)
        insight_only_fields = extracted_fields - METADATA_ONLY_FIELDS

        missing = insight_only_fields - table_cols
        assert not missing, \
            f"{table_name} table missing columns: {missing}\n" \
            f"Method requests: {sorted(extracted_fields)}\n" \
            f"Available: {sorted(table_cols)}"

        print(f"✓ {method_name} → {table_name}: {len(extracted_fields)} fields validated")

    def test_grab_adset_insights_fields(self, schema_columns, extracted_fields_by_method):
        """Test that grab_adset_insights fields exist in schema."""
        method_name = 'grab_adset_insights'
        table_name = METHOD_TO_TABLE[method_name]

        assert method_name in extracted_fields_by_method, f"Could not extract fields from {method_name}"

        extracted_fields = set(extracted_fields_by_method[method_name])
        table_cols = schema_columns.get(table_name, set())
        assert table_cols, f"Table {table_name} not found in schema"

        # Remove ID/name fields (stored in metadata tables, not insights table)
        insight_only_fields = extracted_fields - METADATA_ONLY_FIELDS

        missing = insight_only_fields - table_cols
        assert not missing, \
            f"{table_name} table missing columns: {missing}\n" \
            f"Method requests: {sorted(extracted_fields)}\n" \
            f"Available: {sorted(table_cols)}"

        print(f"✓ {method_name} → {table_name}: {len(extracted_fields)} fields validated")

    def test_grab_campaign_insights_by_country_fields(self, schema_columns, extracted_fields_by_method):
        """Test that grab_campaign_insights_by_country fields exist in schema."""
        method_name = 'grab_campaign_insights_by_country'
        table_name = METHOD_TO_TABLE[method_name]

        assert method_name in extracted_fields_by_method, f"Could not extract fields from {method_name}"

        extracted_fields = set(extracted_fields_by_method[method_name])
        table_cols = schema_columns.get(table_name, set())
        assert table_cols, f"Table {table_name} not found in schema"

        # Remove ID/name fields (stored in metadata tables, not insights table)
        insight_only_fields = extracted_fields - METADATA_ONLY_FIELDS

        # Country is special - it's part of the breakdown
        assert "country" in table_cols, \
            f"country field missing in {table_name} table\n" \
            f"Available: {sorted(table_cols)}"

        missing = insight_only_fields - table_cols
        assert not missing, \
            f"{table_name} table missing columns: {missing}\n" \
            f"Method requests: {sorted(extracted_fields)}\n" \
            f"Available: {sorted(table_cols)}"

        print(f"✓ {method_name} → {table_name}: {len(extracted_fields)} fields validated")

    def test_all_tables_exist(self, schema_columns):
        """Test that all required insight tables exist in schema."""
        required_tables = {
            "account_insights",
            "campaign_insights",
            "adset_insights",
            "campaign_insights_by_country",
        }

        existing_tables = set(schema_columns.keys())
        missing = required_tables - existing_tables

        assert not missing, \
            f"Missing tables: {missing}\n" \
            f"Found: {sorted(existing_tables)}"

    def test_schema_documentation(self, schema_columns):
        """Print out the parsed schema for verification."""
        print("\n" + "="*80)
        print("PARSED DATABASE SCHEMA")
        print("="*80)

        for table_name in sorted(schema_columns.keys()):
            columns = sorted(schema_columns[table_name])
            print(f"\nTable: {table_name}")
            print(f"Columns ({len(columns)}): {', '.join(columns)}")

    def test_extracted_fields_documentation(self, extracted_fields_by_method):
        """Print out extracted fields from each method."""
        print("\n" + "="*80)
        print("EXTRACTED FIELDS FROM GRAB METHODS")
        print("="*80)

        for method_name, fields in sorted(extracted_fields_by_method.items()):
            print(f"\n{method_name}:")
            print(f"  Fields ({len(fields)}): {', '.join(sorted(set(fields)))}")


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
316 uv.lock generated
@@ -1,6 +1,10 @@
 version = 1
 revision = 3
 requires-python = ">=3.13"
+resolution-markers = [
+    "python_full_version >= '3.14'",
+    "python_full_version < '3.14'",
+]
 
 [[package]]
 name = "aiohappyeyeballs"
@@ -130,6 +134,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/3a/2a/7cc015f5b9f5db42b7d48157e23356022889fc354a2813c15934b7cb5c0e/attrs-25.4.0-py3-none-any.whl", hash = "sha256:adcf7e2a1fb3b36ac48d97835bb6d8ade15b8dcce26aba8bf1d14847b57a3373", size = 67615, upload-time = "2025-10-06T13:54:43.17Z" },
 ]
 
+[[package]]
+name = "cachetools"
+version = "6.2.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/cc/7e/b975b5814bd36faf009faebe22c1072a1fa1168db34d285ef0ba071ad78c/cachetools-6.2.1.tar.gz", hash = "sha256:3f391e4bd8f8bf0931169baf7456cc822705f4e2a31f840d218f445b9a854201", size = 31325, upload-time = "2025-10-12T14:55:30.139Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/96/c5/1e741d26306c42e2bf6ab740b2202872727e0f606033c9dd713f8b93f5a8/cachetools-6.2.1-py3-none-any.whl", hash = "sha256:09868944b6dde876dfd44e1d47e18484541eaf12f26f29b7af91b26cc892d701", size = 11280, upload-time = "2025-10-12T14:55:28.382Z" },
+]
+
 [[package]]
 name = "certifi"
 version = "2025.10.5"
@@ -180,6 +193,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/0a/4c/925909008ed5a988ccbb72dcc897407e5d6d3bd72410d69e051fc0c14647/charset_normalizer-3.4.4-py3-none-any.whl", hash = "sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f", size = 53402, upload-time = "2025-10-14T04:42:31.76Z" },
 ]
 
+[[package]]
+name = "colorama"
+version = "0.4.6"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
+]
+
 [[package]]
 name = "curlify"
 version = "3.0.0"
@@ -281,6 +303,80 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/9a/9a/e35b4a917281c0b8419d4207f4334c8e8c5dbf4f3f5f9ada73958d937dcc/frozenlist-1.8.0-py3-none-any.whl", hash = "sha256:0c18a16eab41e82c295618a77502e17b195883241c563b00f0aa5106fc4eaa0d", size = 13409, upload-time = "2025-10-06T05:38:16.721Z" },
 ]
 
+[[package]]
+name = "google-ads"
+version = "28.3.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "google-api-core" },
+    { name = "google-auth-oauthlib" },
+    { name = "googleapis-common-protos" },
+    { name = "grpcio" },
+    { name = "grpcio-status" },
+    { name = "proto-plus" },
+    { name = "protobuf" },
+    { name = "pyyaml" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/a7/ee/30bf06a8334333a43805050c7530637b7c308f371945e3cad7d78b4c5287/google_ads-28.3.0.tar.gz", hash = "sha256:d544e7e3792974e9dc6a016e0eb264f9218526be698c8c6b8a438717a6dcc95b", size = 9222858, upload-time = "2025-10-22T16:22:43.726Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/ca/cf/21541e673e47582ac46b164817ff359370ed9897db977225587f5290b202/google_ads-28.3.0-py3-none-any.whl", hash = "sha256:11ec6227784a565de3ad3f0047ac82eb13c6bfca1d4a5862df9b3c63162fbb40", size = 17781520, upload-time = "2025-10-22T16:22:40.881Z" },
+]
+
+[[package]]
+name = "google-api-core"
+version = "2.28.1"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "google-auth" },
+    { name = "googleapis-common-protos" },
+    { name = "proto-plus" },
+    { name = "protobuf" },
+    { name = "requests" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/61/da/83d7043169ac2c8c7469f0e375610d78ae2160134bf1b80634c482fa079c/google_api_core-2.28.1.tar.gz", hash = "sha256:2b405df02d68e68ce0fbc138559e6036559e685159d148ae5861013dc201baf8", size = 176759, upload-time = "2025-10-28T21:34:51.529Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/ed/d4/90197b416cb61cefd316964fd9e7bd8324bcbafabf40eef14a9f20b81974/google_api_core-2.28.1-py3-none-any.whl", hash = "sha256:4021b0f8ceb77a6fb4de6fde4502cecab45062e66ff4f2895169e0b35bc9466c", size = 173706, upload-time = "2025-10-28T21:34:50.151Z" },
+]
+
+[[package]]
+name = "google-auth"
+version = "2.42.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "cachetools" },
+    { name = "pyasn1-modules" },
+    { name = "rsa" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/11/75/28881e9d7de9b3d61939bc9624bd8fa594eb787a00567aba87173c790f09/google_auth-2.42.0.tar.gz", hash = "sha256:9bbbeef3442586effb124d1ca032cfb8fb7acd8754ab79b55facd2b8f3ab2802", size = 295400, upload-time = "2025-10-28T17:38:08.599Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/87/24/ec82aee6ba1a076288818fe5cc5125f4d93fffdc68bb7b381c68286c8aaa/google_auth-2.42.0-py2.py3-none-any.whl", hash = "sha256:f8f944bcb9723339b0ef58a73840f3c61bc91b69bf7368464906120b55804473", size = 222550, upload-time = "2025-10-28T17:38:05.496Z" },
+]
+
+[[package]]
+name = "google-auth-oauthlib"
+version = "1.2.2"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "google-auth" },
+    { name = "requests-oauthlib" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/fb/87/e10bf24f7bcffc1421b84d6f9c3377c30ec305d082cd737ddaa6d8f77f7c/google_auth_oauthlib-1.2.2.tar.gz", hash = "sha256:11046fb8d3348b296302dd939ace8af0a724042e8029c1b872d87fabc9f41684", size = 20955, upload-time = "2025-04-22T16:40:29.172Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/ac/84/40ee070be95771acd2f4418981edb834979424565c3eec3cd88b6aa09d24/google_auth_oauthlib-1.2.2-py3-none-any.whl", hash = "sha256:fd619506f4b3908b5df17b65f39ca8d66ea56986e5472eb5978fd8f3786f00a2", size = 19072, upload-time = "2025-04-22T16:40:28.174Z" },
+]
+
+[[package]]
+name = "googleapis-common-protos"
+version = "1.71.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "protobuf" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/30/43/b25abe02db2911397819003029bef768f68a974f2ece483e6084d1a5f754/googleapis_common_protos-1.71.0.tar.gz", hash = "sha256:1aec01e574e29da63c80ba9f7bbf1ccfaacf1da877f23609fe236ca7c72a2e2e", size = 146454, upload-time = "2025-10-20T14:58:08.732Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/25/e8/eba9fece11d57a71e3e22ea672742c8f3cf23b35730c9e96db768b295216/googleapis_common_protos-1.71.0-py3-none-any.whl", hash = "sha256:59034a1d849dc4d18971997a72ac56246570afdd17f9369a0ff68218d50ab78c", size = 294576, upload-time = "2025-10-20T14:56:21.295Z" },
+]
+
 [[package]]
 name = "greenlet"
 version = "3.2.4"
@@ -295,6 +391,8 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/ee/43/3cecdc0349359e1a527cbf2e3e28e5f8f06d3343aaf82ca13437a9aa290f/greenlet-3.2.4-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:23768528f2911bcd7e475210822ffb5254ed10d71f4028387e5a99b4c6699671", size = 610497, upload-time = "2025-08-07T13:18:31.636Z" },
     { url = "https://files.pythonhosted.org/packages/b8/19/06b6cf5d604e2c382a6f31cafafd6f33d5dea706f4db7bdab184bad2b21d/greenlet-3.2.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:00fadb3fedccc447f517ee0d3fd8fe49eae949e1cd0f6a611818f4f6fb7dc83b", size = 1121662, upload-time = "2025-08-07T13:42:41.117Z" },
     { url = "https://files.pythonhosted.org/packages/a2/15/0d5e4e1a66fab130d98168fe984c509249c833c1a3c16806b90f253ce7b9/greenlet-3.2.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:d25c5091190f2dc0eaa3f950252122edbbadbb682aa7b1ef2f8af0f8c0afefae", size = 1149210, upload-time = "2025-08-07T13:18:24.072Z" },
|
{ url = "https://files.pythonhosted.org/packages/a2/15/0d5e4e1a66fab130d98168fe984c509249c833c1a3c16806b90f253ce7b9/greenlet-3.2.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:d25c5091190f2dc0eaa3f950252122edbbadbb682aa7b1ef2f8af0f8c0afefae", size = 1149210, upload-time = "2025-08-07T13:18:24.072Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/1c/53/f9c440463b3057485b8594d7a638bed53ba531165ef0ca0e6c364b5cc807/greenlet-3.2.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6e343822feb58ac4d0a1211bd9399de2b3a04963ddeec21530fc426cc121f19b", size = 1564759, upload-time = "2025-11-04T12:42:19.395Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/47/e4/3bb4240abdd0a8d23f4f88adec746a3099f0d86bfedb623f063b2e3b4df0/greenlet-3.2.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ca7f6f1f2649b89ce02f6f229d7c19f680a6238af656f61e0115b24857917929", size = 1634288, upload-time = "2025-11-04T12:42:21.174Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/0b/55/2321e43595e6801e105fcfdee02b34c0f996eb71e6ddffca6b10b7e1d771/greenlet-3.2.4-cp313-cp313-win_amd64.whl", hash = "sha256:554b03b6e73aaabec3745364d6239e9e012d64c68ccd0b8430c64ccc14939a8b", size = 299685, upload-time = "2025-08-07T13:24:38.824Z" },
|
{ url = "https://files.pythonhosted.org/packages/0b/55/2321e43595e6801e105fcfdee02b34c0f996eb71e6ddffca6b10b7e1d771/greenlet-3.2.4-cp313-cp313-win_amd64.whl", hash = "sha256:554b03b6e73aaabec3745364d6239e9e012d64c68ccd0b8430c64ccc14939a8b", size = 299685, upload-time = "2025-08-07T13:24:38.824Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/22/5c/85273fd7cc388285632b0498dbbab97596e04b154933dfe0f3e68156c68c/greenlet-3.2.4-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:49a30d5fda2507ae77be16479bdb62a660fa51b1eb4928b524975b3bde77b3c0", size = 273586, upload-time = "2025-08-07T13:16:08.004Z" },
|
{ url = "https://files.pythonhosted.org/packages/22/5c/85273fd7cc388285632b0498dbbab97596e04b154933dfe0f3e68156c68c/greenlet-3.2.4-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:49a30d5fda2507ae77be16479bdb62a660fa51b1eb4928b524975b3bde77b3c0", size = 273586, upload-time = "2025-08-07T13:16:08.004Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/d1/75/10aeeaa3da9332c2e761e4c50d4c3556c21113ee3f0afa2cf5769946f7a3/greenlet-3.2.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:299fd615cd8fc86267b47597123e3f43ad79c9d8a22bebdce535e53550763e2f", size = 686346, upload-time = "2025-08-07T13:42:59.944Z" },
|
{ url = "https://files.pythonhosted.org/packages/d1/75/10aeeaa3da9332c2e761e4c50d4c3556c21113ee3f0afa2cf5769946f7a3/greenlet-3.2.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:299fd615cd8fc86267b47597123e3f43ad79c9d8a22bebdce535e53550763e2f", size = 686346, upload-time = "2025-08-07T13:42:59.944Z" },
|
||||||
@@ -302,9 +400,56 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/dc/8b/29aae55436521f1d6f8ff4e12fb676f3400de7fcf27fccd1d4d17fd8fecd/greenlet-3.2.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b4a1870c51720687af7fa3e7cda6d08d801dae660f75a76f3845b642b4da6ee1", size = 694659, upload-time = "2025-08-07T13:53:17.759Z" },
|
{ url = "https://files.pythonhosted.org/packages/dc/8b/29aae55436521f1d6f8ff4e12fb676f3400de7fcf27fccd1d4d17fd8fecd/greenlet-3.2.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b4a1870c51720687af7fa3e7cda6d08d801dae660f75a76f3845b642b4da6ee1", size = 694659, upload-time = "2025-08-07T13:53:17.759Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/92/2e/ea25914b1ebfde93b6fc4ff46d6864564fba59024e928bdc7de475affc25/greenlet-3.2.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:061dc4cf2c34852b052a8620d40f36324554bc192be474b9e9770e8c042fd735", size = 695355, upload-time = "2025-08-07T13:18:34.517Z" },
|
{ url = "https://files.pythonhosted.org/packages/92/2e/ea25914b1ebfde93b6fc4ff46d6864564fba59024e928bdc7de475affc25/greenlet-3.2.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:061dc4cf2c34852b052a8620d40f36324554bc192be474b9e9770e8c042fd735", size = 695355, upload-time = "2025-08-07T13:18:34.517Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/72/60/fc56c62046ec17f6b0d3060564562c64c862948c9d4bc8aa807cf5bd74f4/greenlet-3.2.4-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:44358b9bf66c8576a9f57a590d5f5d6e72fa4228b763d0e43fee6d3b06d3a337", size = 657512, upload-time = "2025-08-07T13:18:33.969Z" },
|
{ url = "https://files.pythonhosted.org/packages/72/60/fc56c62046ec17f6b0d3060564562c64c862948c9d4bc8aa807cf5bd74f4/greenlet-3.2.4-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:44358b9bf66c8576a9f57a590d5f5d6e72fa4228b763d0e43fee6d3b06d3a337", size = 657512, upload-time = "2025-08-07T13:18:33.969Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/23/6e/74407aed965a4ab6ddd93a7ded3180b730d281c77b765788419484cdfeef/greenlet-3.2.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:2917bdf657f5859fbf3386b12d68ede4cf1f04c90c3a6bc1f013dd68a22e2269", size = 1612508, upload-time = "2025-11-04T12:42:23.427Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/0d/da/343cd760ab2f92bac1845ca07ee3faea9fe52bee65f7bcb19f16ad7de08b/greenlet-3.2.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:015d48959d4add5d6c9f6c5210ee3803a830dce46356e3bc326d6776bde54681", size = 1680760, upload-time = "2025-11-04T12:42:25.341Z" },
|
||||||
{ url = "https://files.pythonhosted.org/packages/e3/a5/6ddab2b4c112be95601c13428db1d8b6608a8b6039816f2ba09c346c08fc/greenlet-3.2.4-cp314-cp314-win_amd64.whl", hash = "sha256:e37ab26028f12dbb0ff65f29a8d3d44a765c61e729647bf2ddfbbed621726f01", size = 303425, upload-time = "2025-08-07T13:32:27.59Z" },
|
{ url = "https://files.pythonhosted.org/packages/e3/a5/6ddab2b4c112be95601c13428db1d8b6608a8b6039816f2ba09c346c08fc/greenlet-3.2.4-cp314-cp314-win_amd64.whl", hash = "sha256:e37ab26028f12dbb0ff65f29a8d3d44a765c61e729647bf2ddfbbed621726f01", size = 303425, upload-time = "2025-08-07T13:32:27.59Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "grpcio"
|
||||||
|
version = "1.76.0"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
dependencies = [
|
||||||
|
{ name = "typing-extensions" },
|
||||||
|
]
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/b6/e0/318c1ce3ae5a17894d5791e87aea147587c9e702f24122cc7a5c8bbaeeb1/grpcio-1.76.0.tar.gz", hash = "sha256:7be78388d6da1a25c0d5ec506523db58b18be22d9c37d8d3a32c08be4987bd73", size = 12785182, upload-time = "2025-10-21T16:23:12.106Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/fc/ed/71467ab770effc9e8cef5f2e7388beb2be26ed642d567697bb103a790c72/grpcio-1.76.0-cp313-cp313-linux_armv7l.whl", hash = "sha256:26ef06c73eb53267c2b319f43e6634c7556ea37672029241a056629af27c10e2", size = 5807716, upload-time = "2025-10-21T16:21:48.475Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/2c/85/c6ed56f9817fab03fa8a111ca91469941fb514e3e3ce6d793cb8f1e1347b/grpcio-1.76.0-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:45e0111e73f43f735d70786557dc38141185072d7ff8dc1829d6a77ac1471468", size = 11821522, upload-time = "2025-10-21T16:21:51.142Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/ac/31/2b8a235ab40c39cbc141ef647f8a6eb7b0028f023015a4842933bc0d6831/grpcio-1.76.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:83d57312a58dcfe2a3a0f9d1389b299438909a02db60e2f2ea2ae2d8034909d3", size = 6362558, upload-time = "2025-10-21T16:21:54.213Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/bd/64/9784eab483358e08847498ee56faf8ff6ea8e0a4592568d9f68edc97e9e9/grpcio-1.76.0-cp313-cp313-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:3e2a27c89eb9ac3d81ec8835e12414d73536c6e620355d65102503064a4ed6eb", size = 7049990, upload-time = "2025-10-21T16:21:56.476Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/2b/94/8c12319a6369434e7a184b987e8e9f3b49a114c489b8315f029e24de4837/grpcio-1.76.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:61f69297cba3950a524f61c7c8ee12e55c486cb5f7db47ff9dcee33da6f0d3ae", size = 6575387, upload-time = "2025-10-21T16:21:59.051Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/15/0f/f12c32b03f731f4a6242f771f63039df182c8b8e2cf8075b245b409259d4/grpcio-1.76.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6a15c17af8839b6801d554263c546c69c4d7718ad4321e3166175b37eaacca77", size = 7166668, upload-time = "2025-10-21T16:22:02.049Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/ff/2d/3ec9ce0c2b1d92dd59d1c3264aaec9f0f7c817d6e8ac683b97198a36ed5a/grpcio-1.76.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:25a18e9810fbc7e7f03ec2516addc116a957f8cbb8cbc95ccc80faa072743d03", size = 8124928, upload-time = "2025-10-21T16:22:04.984Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/1a/74/fd3317be5672f4856bcdd1a9e7b5e17554692d3db9a3b273879dc02d657d/grpcio-1.76.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:931091142fd8cc14edccc0845a79248bc155425eee9a98b2db2ea4f00a235a42", size = 7589983, upload-time = "2025-10-21T16:22:07.881Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/45/bb/ca038cf420f405971f19821c8c15bcbc875505f6ffadafe9ffd77871dc4c/grpcio-1.76.0-cp313-cp313-win32.whl", hash = "sha256:5e8571632780e08526f118f74170ad8d50fb0a48c23a746bef2a6ebade3abd6f", size = 3984727, upload-time = "2025-10-21T16:22:10.032Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/41/80/84087dc56437ced7cdd4b13d7875e7439a52a261e3ab4e06488ba6173b0a/grpcio-1.76.0-cp313-cp313-win_amd64.whl", hash = "sha256:f9f7bd5faab55f47231ad8dba7787866b69f5e93bc306e3915606779bbfb4ba8", size = 4702799, upload-time = "2025-10-21T16:22:12.709Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/b4/46/39adac80de49d678e6e073b70204091e76631e03e94928b9ea4ecf0f6e0e/grpcio-1.76.0-cp314-cp314-linux_armv7l.whl", hash = "sha256:ff8a59ea85a1f2191a0ffcc61298c571bc566332f82e5f5be1b83c9d8e668a62", size = 5808417, upload-time = "2025-10-21T16:22:15.02Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/9c/f5/a4531f7fb8b4e2a60b94e39d5d924469b7a6988176b3422487be61fe2998/grpcio-1.76.0-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:06c3d6b076e7b593905d04fdba6a0525711b3466f43b3400266f04ff735de0cd", size = 11828219, upload-time = "2025-10-21T16:22:17.954Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/4b/1c/de55d868ed7a8bd6acc6b1d6ddc4aa36d07a9f31d33c912c804adb1b971b/grpcio-1.76.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fd5ef5932f6475c436c4a55e4336ebbe47bd3272be04964a03d316bbf4afbcbc", size = 6367826, upload-time = "2025-10-21T16:22:20.721Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/59/64/99e44c02b5adb0ad13ab3adc89cb33cb54bfa90c74770f2607eea629b86f/grpcio-1.76.0-cp314-cp314-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:b331680e46239e090f5b3cead313cc772f6caa7d0fc8de349337563125361a4a", size = 7049550, upload-time = "2025-10-21T16:22:23.637Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/43/28/40a5be3f9a86949b83e7d6a2ad6011d993cbe9b6bd27bea881f61c7788b6/grpcio-1.76.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2229ae655ec4e8999599469559e97630185fdd53ae1e8997d147b7c9b2b72cba", size = 6575564, upload-time = "2025-10-21T16:22:26.016Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/4b/a9/1be18e6055b64467440208a8559afac243c66a8b904213af6f392dc2212f/grpcio-1.76.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:490fa6d203992c47c7b9e4a9d39003a0c2bcc1c9aa3c058730884bbbb0ee9f09", size = 7176236, upload-time = "2025-10-21T16:22:28.362Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/0f/55/dba05d3fcc151ce6e81327541d2cc8394f442f6b350fead67401661bf041/grpcio-1.76.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:479496325ce554792dba6548fae3df31a72cef7bad71ca2e12b0e58f9b336bfc", size = 8125795, upload-time = "2025-10-21T16:22:31.075Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/4a/45/122df922d05655f63930cf42c9e3f72ba20aadb26c100ee105cad4ce4257/grpcio-1.76.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:1c9b93f79f48b03ada57ea24725d83a30284a012ec27eab2cf7e50a550cbbbcc", size = 7592214, upload-time = "2025-10-21T16:22:33.831Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/4a/6e/0b899b7f6b66e5af39e377055fb4a6675c9ee28431df5708139df2e93233/grpcio-1.76.0-cp314-cp314-win32.whl", hash = "sha256:747fa73efa9b8b1488a95d0ba1039c8e2dca0f741612d80415b1e1c560febf4e", size = 4062961, upload-time = "2025-10-21T16:22:36.468Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/19/41/0b430b01a2eb38ee887f88c1f07644a1df8e289353b78e82b37ef988fb64/grpcio-1.76.0-cp314-cp314-win_amd64.whl", hash = "sha256:922fa70ba549fce362d2e2871ab542082d66e2aaf0c19480ea453905b01f384e", size = 4834462, upload-time = "2025-10-21T16:22:39.772Z" },
|
||||||
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "grpcio-status"
|
||||||
|
version = "1.76.0"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
dependencies = [
|
||||||
|
{ name = "googleapis-common-protos" },
|
||||||
|
{ name = "grpcio" },
|
||||||
|
{ name = "protobuf" },
|
||||||
|
]
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/3f/46/e9f19d5be65e8423f886813a2a9d0056ba94757b0c5007aa59aed1a961fa/grpcio_status-1.76.0.tar.gz", hash = "sha256:25fcbfec74c15d1a1cb5da3fab8ee9672852dc16a5a9eeb5baf7d7a9952943cd", size = 13679, upload-time = "2025-10-21T16:28:52.545Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/8c/cc/27ba60ad5a5f2067963e6a858743500df408eb5855e98be778eaef8c9b02/grpcio_status-1.76.0-py3-none-any.whl", hash = "sha256:380568794055a8efbbd8871162df92012e0228a5f6dffaf57f2a00c534103b18", size = 14425, upload-time = "2025-10-21T16:28:40.853Z" },
|
||||||
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "idna"
|
name = "idna"
|
||||||
version = "3.11"
|
version = "3.11"
|
||||||
@@ -314,6 +459,15 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
|
{ url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "iniconfig"
|
||||||
|
version = "2.3.0"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
|
||||||
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "mako"
|
name = "mako"
|
||||||
version = "1.3.10"
|
version = "1.3.10"
|
||||||
@@ -387,21 +541,32 @@ dependencies = [
|
|||||||
{ name = "alembic" },
|
{ name = "alembic" },
|
||||||
{ name = "asyncpg" },
|
{ name = "asyncpg" },
|
||||||
{ name = "facebook-business" },
|
{ name = "facebook-business" },
|
||||||
|
{ name = "google-ads" },
|
||||||
{ name = "python-dotenv" },
|
{ name = "python-dotenv" },
|
||||||
{ name = "requests-oauthlib" },
|
{ name = "requests-oauthlib" },
|
||||||
{ name = "sqlalchemy", extra = ["asyncio"] },
|
{ name = "sqlalchemy", extra = ["asyncio"] },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[package.optional-dependencies]
|
||||||
|
test = [
|
||||||
|
{ name = "pytest" },
|
||||||
|
{ name = "pytest-asyncio" },
|
||||||
|
]
|
||||||
|
|
||||||
[package.metadata]
|
[package.metadata]
|
||||||
requires-dist = [
|
requires-dist = [
|
||||||
{ name = "aiohttp", specifier = ">=3.13.1" },
|
{ name = "aiohttp", specifier = ">=3.13.1" },
|
||||||
{ name = "alembic", specifier = ">=1.17.0" },
|
{ name = "alembic", specifier = ">=1.17.0" },
|
||||||
{ name = "asyncpg", specifier = ">=0.30.0" },
|
{ name = "asyncpg", specifier = ">=0.30.0" },
|
||||||
{ name = "facebook-business", specifier = ">=23.0.3" },
|
{ name = "facebook-business", specifier = ">=23.0.3" },
|
||||||
|
{ name = "google-ads", specifier = ">=28.3.0" },
|
||||||
|
{ name = "pytest", marker = "extra == 'test'", specifier = ">=8.0.0" },
|
||||||
|
{ name = "pytest-asyncio", marker = "extra == 'test'", specifier = ">=0.25.0" },
|
||||||
{ name = "python-dotenv", specifier = ">=1.1.1" },
|
{ name = "python-dotenv", specifier = ">=1.1.1" },
|
||||||
{ name = "requests-oauthlib", specifier = ">=2.0.0" },
|
{ name = "requests-oauthlib", specifier = ">=2.0.0" },
|
||||||
{ name = "sqlalchemy", extras = ["asyncio"], specifier = ">=2.0.44" },
|
{ name = "sqlalchemy", extras = ["asyncio"], specifier = ">=2.0.44" },
|
||||||
]
|
]
|
||||||
|
provides-extras = ["test"]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "multidict"
|
name = "multidict"
|
||||||
@@ -493,6 +658,24 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/be/9c/92789c596b8df838baa98fa71844d84283302f7604ed565dafe5a6b5041a/oauthlib-3.3.1-py3-none-any.whl", hash = "sha256:88119c938d2b8fb88561af5f6ee0eec8cc8d552b7bb1f712743136eb7523b7a1", size = 160065, upload-time = "2025-06-19T22:48:06.508Z" },
|
{ url = "https://files.pythonhosted.org/packages/be/9c/92789c596b8df838baa98fa71844d84283302f7604ed565dafe5a6b5041a/oauthlib-3.3.1-py3-none-any.whl", hash = "sha256:88119c938d2b8fb88561af5f6ee0eec8cc8d552b7bb1f712743136eb7523b7a1", size = 160065, upload-time = "2025-06-19T22:48:06.508Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "packaging"
|
||||||
|
version = "25.0"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/a1/d4/1fc4078c65507b51b96ca8f8c3ba19e6a61c8253c72794544580a7b6c24d/packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f", size = 165727, upload-time = "2025-04-19T11:48:59.673Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469, upload-time = "2025-04-19T11:48:57.875Z" },
|
||||||
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "pluggy"
|
||||||
|
version = "1.6.0"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
|
||||||
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "propcache"
|
name = "propcache"
|
||||||
version = "0.4.1"
|
version = "0.4.1"
|
||||||
@@ -562,6 +745,54 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/5b/5a/bc7b4a4ef808fa59a816c17b20c4bef6884daebbdf627ff2a161da67da19/propcache-0.4.1-py3-none-any.whl", hash = "sha256:af2a6052aeb6cf17d3e46ee169099044fd8224cbaf75c76a2ef596e8163e2237", size = 13305, upload-time = "2025-10-08T19:49:00.792Z" },
|
{ url = "https://files.pythonhosted.org/packages/5b/5a/bc7b4a4ef808fa59a816c17b20c4bef6884daebbdf627ff2a161da67da19/propcache-0.4.1-py3-none-any.whl", hash = "sha256:af2a6052aeb6cf17d3e46ee169099044fd8224cbaf75c76a2ef596e8163e2237", size = 13305, upload-time = "2025-10-08T19:49:00.792Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "proto-plus"
|
||||||
|
version = "1.26.1"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
dependencies = [
|
||||||
|
{ name = "protobuf" },
|
||||||
|
]
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/f4/ac/87285f15f7cce6d4a008f33f1757fb5a13611ea8914eb58c3d0d26243468/proto_plus-1.26.1.tar.gz", hash = "sha256:21a515a4c4c0088a773899e23c7bbade3d18f9c66c73edd4c7ee3816bc96a012", size = 56142, upload-time = "2025-03-10T15:54:38.843Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/4e/6d/280c4c2ce28b1593a19ad5239c8b826871fc6ec275c21afc8e1820108039/proto_plus-1.26.1-py3-none-any.whl", hash = "sha256:13285478c2dcf2abb829db158e1047e2f1e8d63a077d94263c2b88b043c75a66", size = 50163, upload-time = "2025-03-10T15:54:37.335Z" },
|
||||||
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "protobuf"
|
||||||
|
version = "6.33.0"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/19/ff/64a6c8f420818bb873713988ca5492cba3a7946be57e027ac63495157d97/protobuf-6.33.0.tar.gz", hash = "sha256:140303d5c8d2037730c548f8c7b93b20bb1dc301be280c378b82b8894589c954", size = 443463, upload-time = "2025-10-15T20:39:52.159Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7e/ee/52b3fa8feb6db4a833dfea4943e175ce645144532e8a90f72571ad85df4e/protobuf-6.33.0-cp310-abi3-win32.whl", hash = "sha256:d6101ded078042a8f17959eccd9236fb7a9ca20d3b0098bbcb91533a5680d035", size = 425593, upload-time = "2025-10-15T20:39:40.29Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7b/c6/7a465f1825872c55e0341ff4a80198743f73b69ce5d43ab18043699d1d81/protobuf-6.33.0-cp310-abi3-win_amd64.whl", hash = "sha256:9a031d10f703f03768f2743a1c403af050b6ae1f3480e9c140f39c45f81b13ee", size = 436882, upload-time = "2025-10-15T20:39:42.841Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/e1/a9/b6eee662a6951b9c3640e8e452ab3e09f117d99fc10baa32d1581a0d4099/protobuf-6.33.0-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:905b07a65f1a4b72412314082c7dbfae91a9e8b68a0cc1577515f8df58ecf455", size = 427521, upload-time = "2025-10-15T20:39:43.803Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/10/35/16d31e0f92c6d2f0e77c2a3ba93185130ea13053dd16200a57434c882f2b/protobuf-6.33.0-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:e0697ece353e6239b90ee43a9231318302ad8353c70e6e45499fa52396debf90", size = 324445, upload-time = "2025-10-15T20:39:44.932Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/e6/eb/2a981a13e35cda8b75b5585aaffae2eb904f8f351bdd3870769692acbd8a/protobuf-6.33.0-cp39-abi3-manylinux2014_s390x.whl", hash = "sha256:e0a1715e4f27355afd9570f3ea369735afc853a6c3951a6afe1f80d8569ad298", size = 339159, upload-time = "2025-10-15T20:39:46.186Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/21/51/0b1cbad62074439b867b4e04cc09b93f6699d78fd191bed2bbb44562e077/protobuf-6.33.0-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:35be49fd3f4fefa4e6e2aacc35e8b837d6703c37a2168a55ac21e9b1bc7559ef", size = 323172, upload-time = "2025-10-15T20:39:47.465Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/07/d1/0a28c21707807c6aacd5dc9c3704b2aa1effbf37adebd8caeaf68b17a636/protobuf-6.33.0-py3-none-any.whl", hash = "sha256:25c9e1963c6734448ea2d308cfa610e692b801304ba0908d7bfa564ac5132995", size = 170477, upload-time = "2025-10-15T20:39:51.311Z" },
|
||||||
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "pyasn1"
|
||||||
|
version = "0.6.1"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/ba/e9/01f1a64245b89f039897cb0130016d79f77d52669aae6ee7b159a6c4c018/pyasn1-0.6.1.tar.gz", hash = "sha256:6f580d2bdd84365380830acf45550f2511469f673cb4a5ae3857a3170128b034", size = 145322, upload-time = "2024-09-10T22:41:42.55Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/c8/f1/d6a797abb14f6283c0ddff96bbdd46937f64122b8c925cab503dd37f8214/pyasn1-0.6.1-py3-none-any.whl", hash = "sha256:0d632f46f2ba09143da3a8afe9e33fb6f92fa2320ab7e886e2d0f7672af84629", size = 83135, upload-time = "2024-09-11T16:00:36.122Z" },
|
||||||
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "pyasn1-modules"
|
||||||
|
version = "0.4.2"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
dependencies = [
|
||||||
|
{ name = "pyasn1" },
|
||||||
|
]
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/e9/e6/78ebbb10a8c8e4b61a59249394a4a594c1a7af95593dc933a349c8d00964/pyasn1_modules-0.4.2.tar.gz", hash = "sha256:677091de870a80aae844b1ca6134f54652fa2c8c5a52aa396440ac3106e941e6", size = 307892, upload-time = "2025-03-28T02:41:22.17Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/47/8d/d529b5d697919ba8c11ad626e835d4039be708a35b0d22de83a269a6682c/pyasn1_modules-0.4.2-py3-none-any.whl", hash = "sha256:29253a9207ce32b64c3ac6600edc75368f98473906e8fd1043bd6b5b1de2c14a", size = 181259, upload-time = "2025-03-28T02:41:19.028Z" },
|
||||||
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "pycountry"
|
name = "pycountry"
|
||||||
version = "24.6.1"
|
version = "24.6.1"
|
||||||
@@ -571,6 +802,43 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/b1/ec/1fb891d8a2660716aadb2143235481d15ed1cbfe3ad669194690b0604492/pycountry-24.6.1-py3-none-any.whl", hash = "sha256:f1a4fb391cd7214f8eefd39556d740adcc233c778a27f8942c8dca351d6ce06f", size = 6335189, upload-time = "2024-06-01T04:11:49.711Z" },
|
{ url = "https://files.pythonhosted.org/packages/b1/ec/1fb891d8a2660716aadb2143235481d15ed1cbfe3ad669194690b0604492/pycountry-24.6.1-py3-none-any.whl", hash = "sha256:f1a4fb391cd7214f8eefd39556d740adcc233c778a27f8942c8dca351d6ce06f", size = 6335189, upload-time = "2024-06-01T04:11:49.711Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "pygments"
|
||||||
|
version = "2.19.2"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
|
||||||
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "pytest"
|
||||||
|
version = "8.4.2"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
dependencies = [
|
||||||
|
{ name = "colorama", marker = "sys_platform == 'win32'" },
|
||||||
|
{ name = "iniconfig" },
|
||||||
|
{ name = "packaging" },
|
||||||
|
{ name = "pluggy" },
|
||||||
|
{ name = "pygments" },
|
||||||
|
]
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/a3/5c/00a0e072241553e1a7496d638deababa67c5058571567b92a7eaa258397c/pytest-8.4.2.tar.gz", hash = "sha256:86c0d0b93306b961d58d62a4db4879f27fe25513d4b969df351abdddb3c30e01", size = 1519618, upload-time = "2025-09-04T14:34:22.711Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/a8/a4/20da314d277121d6534b3a980b29035dcd51e6744bd79075a6ce8fa4eb8d/pytest-8.4.2-py3-none-any.whl", hash = "sha256:872f880de3fc3a5bdc88a11b39c9710c3497a547cfa9320bc3c5e62fbf272e79", size = 365750, upload-time = "2025-09-04T14:34:20.226Z" },
|
||||||
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "pytest-asyncio"
|
||||||
|
version = "1.2.0"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
dependencies = [
|
||||||
|
{ name = "pytest" },
|
||||||
|
]
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/42/86/9e3c5f48f7b7b638b216e4b9e645f54d199d7abbbab7a64a13b4e12ba10f/pytest_asyncio-1.2.0.tar.gz", hash = "sha256:c609a64a2a8768462d0c99811ddb8bd2583c33fd33cf7f21af1c142e824ffb57", size = 50119, upload-time = "2025-09-12T07:33:53.816Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/04/93/2fa34714b7a4ae72f2f8dad66ba17dd9a2c793220719e736dda28b7aec27/pytest_asyncio-1.2.0-py3-none-any.whl", hash = "sha256:8e17ae5e46d8e7efe51ab6494dd2010f4ca8dae51652aa3c8d55acf50bfb2e99", size = 15095, upload-time = "2025-09-12T07:33:52.639Z" },
|
||||||
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "python-dotenv"
|
name = "python-dotenv"
|
||||||
version = "1.1.1"
|
version = "1.1.1"
|
||||||
@@ -580,6 +848,42 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/5f/ed/539768cf28c661b5b068d66d96a2f155c4971a5d55684a514c1a0e0dec2f/python_dotenv-1.1.1-py3-none-any.whl", hash = "sha256:31f23644fe2602f88ff55e1f5c79ba497e01224ee7737937930c448e4d0e24dc", size = 20556, upload-time = "2025-06-24T04:21:06.073Z" },
|
{ url = "https://files.pythonhosted.org/packages/5f/ed/539768cf28c661b5b068d66d96a2f155c4971a5d55684a514c1a0e0dec2f/python_dotenv-1.1.1-py3-none-any.whl", hash = "sha256:31f23644fe2602f88ff55e1f5c79ba497e01224ee7737937930c448e4d0e24dc", size = 20556, upload-time = "2025-06-24T04:21:06.073Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "pyyaml"
|
||||||
|
version = "6.0.3"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" },
|
||||||
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "requests"
|
name = "requests"
|
||||||
version = "2.32.5"
|
version = "2.32.5"
|
||||||
@@ -608,6 +912,18 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/3b/5d/63d4ae3b9daea098d5d6f5da83984853c1bbacd5dc826764b249fe119d24/requests_oauthlib-2.0.0-py2.py3-none-any.whl", hash = "sha256:7dd8a5c40426b779b0868c404bdef9768deccf22749cde15852df527e6269b36", size = 24179, upload-time = "2024-03-22T20:32:28.055Z" },
|
{ url = "https://files.pythonhosted.org/packages/3b/5d/63d4ae3b9daea098d5d6f5da83984853c1bbacd5dc826764b249fe119d24/requests_oauthlib-2.0.0-py2.py3-none-any.whl", hash = "sha256:7dd8a5c40426b779b0868c404bdef9768deccf22749cde15852df527e6269b36", size = 24179, upload-time = "2024-03-22T20:32:28.055Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "rsa"
|
||||||
|
version = "4.9.1"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
dependencies = [
|
||||||
|
{ name = "pyasn1" },
|
||||||
|
]
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/da/8a/22b7beea3ee0d44b1916c0c1cb0ee3af23b700b6da9f04991899d0c555d4/rsa-4.9.1.tar.gz", hash = "sha256:e7bdbfdb5497da4c07dfd35530e1a902659db6ff241e39d9953cad06ebd0ae75", size = 29034, upload-time = "2025-04-16T09:51:18.218Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/64/8d/0133e4eb4beed9e425d9a98ed6e081a55d195481b7632472be1af08d2f6b/rsa-4.9.1-py3-none-any.whl", hash = "sha256:68635866661c6836b8d39430f97a996acbd61bfa49406748ea243539fe239762", size = 34696, upload-time = "2025-04-16T09:51:17.142Z" },
|
||||||
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "six"
|
name = "six"
|
||||||
version = "1.17.0"
|
version = "1.17.0"
|
||||||
|
|||||||
150  view_sql_archive/public/insights_flattened.sql  Normal file
@@ -0,0 +1,150 @@
-- Refreshed when new entries get added (via the REFRESH statements below). Additional metrics can be extracted from actions if necessary.

CREATE MATERIALIZED VIEW adset_insights_flattened AS
SELECT
    time,
    adset_id,
    campaign_id,
    account_id,
    impressions,
    clicks,
    spend,
    reach,
    ctr,
    cpc,
    cpm,
    date_preset,
    date_start,
    date_stop,
    fetched_at,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(actions)
     WHERE value->>'action_type' = 'link_click') AS link_click,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(actions)
     WHERE value->>'action_type' = 'landing_page_view') AS landing_page_view,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(actions)
     WHERE value->>'action_type' = 'lead') AS lead
FROM adset_insights;

-- Add indexes for common query patterns
CREATE INDEX idx_adset_insights_flat_campaign ON adset_insights_flattened(campaign_id);
CREATE INDEX idx_adset_insights_flat_date ON adset_insights_flattened(date_start, date_stop);

CREATE UNIQUE INDEX idx_adset_insights_flat_unique ON adset_insights_flattened(time, adset_id);
REFRESH MATERIALIZED VIEW CONCURRENTLY adset_insights_flattened;
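
-- Plain Postgres materialized views do not refresh themselves: something has to run
-- the REFRESH statements, and REFRESH ... CONCURRENTLY additionally requires the
-- unique index created above. A minimal scheduling sketch, assuming the pg_cron
-- extension is available (extension, job name, and interval are illustrative, not
-- part of this schema):
SELECT cron.schedule(
    'refresh-adset-insights-flattened',  -- job name (placeholder)
    '15 * * * *',                        -- run hourly at minute 15
    $$REFRESH MATERIALIZED VIEW CONCURRENTLY adset_insights_flattened$$
);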

--- same or at least very similar for account_insights

CREATE MATERIALIZED VIEW account_insights_flattened AS
SELECT
    time,
    account_id,
    impressions,
    clicks,
    spend,
    reach,
    frequency,
    ctr,
    cpc,
    cpm,
    cpp,
    date_preset,
    date_start,
    date_stop,
    fetched_at,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(actions)
     WHERE value->>'action_type' = 'link_click') AS link_click,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(actions)
     WHERE value->>'action_type' = 'landing_page_view') AS landing_page_view,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(actions)
     WHERE value->>'action_type' = 'lead') AS lead,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(cost_per_action_type)
     WHERE value->>'action_type' = 'link_click') AS cost_per_link_click,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(cost_per_action_type)
     WHERE value->>'action_type' = 'landing_page_view') AS cost_per_landing_page_view,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(cost_per_action_type)
     WHERE value->>'action_type' = 'lead') AS cost_per_lead
FROM account_insights;

CREATE INDEX idx_account_insights_flat_date ON account_insights_flattened(date_start, date_stop);

CREATE UNIQUE INDEX idx_account_insights_flat_unique ON account_insights_flattened(time, account_id);
REFRESH MATERIALIZED VIEW CONCURRENTLY account_insights_flattened;

--- campaign insights

CREATE MATERIALIZED VIEW campaign_insights_flattened AS
SELECT
    time,
    account_id,
    campaign_id,
    impressions,
    clicks,
    spend,
    reach,
    ctr,
    cpc,
    cpm,
    date_preset,
    date_start,
    date_stop,
    fetched_at,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(actions)
     WHERE value->>'action_type' = 'link_click') AS link_click,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(actions)
     WHERE value->>'action_type' = 'landing_page_view') AS landing_page_view,
    (SELECT (value->>'value')::numeric
     FROM jsonb_array_elements(actions)
     WHERE value->>'action_type' = 'lead') AS lead
FROM campaign_insights;

CREATE INDEX idx_campaign_insights_flat_date ON campaign_insights_flattened(date_start, date_stop);

CREATE UNIQUE INDEX idx_campaign_insights_flat_unique ON campaign_insights_flattened(time, campaign_id);

REFRESH MATERIALIZED VIEW CONCURRENTLY campaign_insights_flattened;

-- Permissions

-- Grant SELECT on the existing materialized views
GRANT SELECT ON account_insights_flattened TO grafana;
GRANT SELECT ON campaign_insights_flattened TO grafana;
GRANT SELECT ON adset_insights_flattened TO grafana;

-- Grant SELECT on all existing tables and views in the schema
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana;

-- Grant SELECT on all future tables and views in the schema
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO grafana;
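
-- The scalar subqueries above flatten individual action_types out of the actions
-- jsonb array. To see which other action_types are available for flattening (the
-- file's opening comment notes more can be extracted if necessary), a quick
-- exploration query against one of the raw tables; jsonb_array_elements exposes
-- each element under the default column name "value":
SELECT DISTINCT value->>'action_type' AS action_type
FROM account_insights, jsonb_array_elements(actions)
ORDER BY action_type;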
10  view_sql_archive/view.md  Normal file
@@ -0,0 +1,10 @@
To make handling the dashboard easier, it's good practice to create views that make the underlying data more accessible. Since the data does not get updated that frequently, we can also use materialized views to speed up query performance at the cost of storage.

## Schemas

public: Contains the data from the meta_api_grabber application. All ad accounts.

meta: Contains data from the airbyte meta api connector. Unfortunately somewhat bugged for insights on the campaign and adset level. Aggregating data from the ads level is possible but much more cumbersome and error-prone than querying it directly. That's why for now I'm continuing to use the meta_api_grabber.

google: Will contain the data from google via the airbyte connector, assuming we get access and the connector is good.
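
With the flattened views in place, a typical dashboard panel can stay plain SQL. A sketch of a daily cost-per-lead query against one of the views (the account id is a placeholder, and `time` is assumed to be a timestamp column):

SELECT
    date_trunc('day', time) AS day,
    sum(spend) AS spend,
    sum(lead) AS leads,
    sum(spend) / NULLIF(sum(lead), 0) AS cost_per_lead
FROM account_insights_flattened
WHERE account_id = 'act_123456789'  -- placeholder account id
GROUP BY day
ORDER BY day;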