Merging schema_extension #9

Merged
jonas merged 15 commits from schema_extension into main 2025-10-20 07:19:26 +00:00
27 changed files with 5352 additions and 107 deletions
Showing only changes of commit 48113f6592

View File

@@ -1 +1,3 @@
This Python project is managed by uv. Use `uv run` to execute the app and tests.
Configuration is handled in a config.yaml file. The annotatedyaml library is used to load secrets: `!secret SOME_SECRET` in the YAML file refers to a secret definition in a secrets.yaml file.

QUICK_REFERENCE.md Normal file
View File

@@ -0,0 +1,108 @@
# Multi-Worker Quick Reference
## TL;DR
**Problem**: Using 4 workers causes duplicate emails and race conditions.
**Solution**: File-based locking ensures only ONE worker runs schedulers.
## Commands
```bash
# Development (1 worker - auto primary)
uvicorn alpine_bits_python.api:app --reload
# Production (4 workers - one becomes primary)
uvicorn alpine_bits_python.api:app --workers 4 --host 0.0.0.0 --port 8000
# Test worker coordination
uv run python test_worker_coordination.py
# Run all tests
uv run pytest tests/ -v
```
## Check Which Worker is Primary
Look for startup logs:
```
[INFO] Worker startup: pid=1001, primary=True ← PRIMARY
[INFO] Worker startup: pid=1002, primary=False ← SECONDARY
[INFO] Worker startup: pid=1003, primary=False ← SECONDARY
[INFO] Worker startup: pid=1004, primary=False ← SECONDARY
[INFO] Daily report scheduler started ← Only on PRIMARY
```
## Lock File
**Location**: `/tmp/alpinebits_primary_worker.lock`
**Check lock status**:
```bash
# See which PID holds the lock
cat /tmp/alpinebits_primary_worker.lock
# Output: 1001
# Verify process is running
ps aux | grep 1001
```
**Clean stale lock** (if needed):
```bash
rm /tmp/alpinebits_primary_worker.lock
# Then restart application
```
## What Runs Where
| Service | Primary Worker | Secondary Workers |
|---------|---------------|-------------------|
| HTTP requests | ✓ Yes | ✓ Yes |
| Email scheduler | ✓ Yes | ✗ No |
| Error alerts | ✓ Yes | ✓ Yes (all workers can send) |
| DB migrations | ✓ Yes | ✗ No |
| Customer hashing | ✓ Yes | ✗ No |
## Troubleshooting
### All workers think they're primary
**Cause**: Lock file not accessible
**Fix**: Check permissions on `/tmp/` or change lock location
### No worker becomes primary
**Cause**: Stale lock file
**Fix**: `rm /tmp/alpinebits_primary_worker.lock` and restart
### Still getting duplicate emails
**Check**: Are you seeing duplicate **scheduled reports** or **error alerts**?
- Scheduled reports should only come from primary ✓
- Error alerts can come from any worker (by design) ✓
## Code Example
```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

from alpine_bits_python.worker_coordination import is_primary_worker

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Acquire lock - only one worker succeeds
    is_primary, worker_lock = is_primary_worker()
    if is_primary:
        # Start singleton services
        scheduler.start()
    # All workers handle requests
    yield
    # Release lock on shutdown
    if worker_lock:
        worker_lock.release()
```
## Documentation
- **Full guide**: `docs/MULTI_WORKER_DEPLOYMENT.md`
- **Solution summary**: `SOLUTION_SUMMARY.md`
- **Implementation**: `src/alpine_bits_python/worker_coordination.py`
- **Test script**: `test_worker_coordination.py`

SOLUTION_SUMMARY.md Normal file
View File

@@ -0,0 +1,193 @@
# Multi-Worker Deployment Solution Summary
## Problem
When running FastAPI with `uvicorn --workers 4`, the `lifespan` function executes in **all 4 worker processes**, causing:
- **Duplicate email notifications** (4x emails sent)
- **Multiple schedulers** running simultaneously
- **Race conditions** in database operations
## Root Cause
Your original implementation tried to detect the primary worker using:
```python
multiprocessing.current_process().name == "MainProcess"
```
**This doesn't work** because with `uvicorn --workers N`, each worker is a separate process with its own name, and none are reliably named "MainProcess".
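For illustration, a hedged sketch of why the check fails: spawned workers get names like `SpawnProcess-1`, never `MainProcess` (this script is hypothetical, not part of the repo):

```python
import multiprocessing as mp

def report() -> None:
    # Prints e.g. "SpawnProcess-1" in each worker, never "MainProcess"
    print(mp.current_process().name)

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    workers = [ctx.Process(target=report) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```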
## Solution Implemented
### File-Based Worker Locking
We implemented a **file-based locking mechanism** that ensures only ONE worker runs singleton services:
```python
# worker_coordination.py
class WorkerLock:
    """Uses fcntl.flock() to coordinate workers across processes."""

    def acquire(self) -> bool:
        """Try to acquire exclusive lock - only one process succeeds."""
        fcntl.flock(self.lock_fd.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
```
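For context, a fuller sketch of what such a lock can look like, assuming the names from the excerpt above (the authoritative version is `worker_coordination.py`):

```python
import fcntl
import os

class WorkerLock:
    """Coordinate workers across processes via an exclusive, non-blocking file lock."""

    def __init__(self, lock_path: str = "/tmp/alpinebits_primary_worker.lock"):
        self.lock_path = lock_path
        self.lock_fd = None

    def acquire(self) -> bool:
        """Try to acquire the lock; exactly one process succeeds."""
        self.lock_fd = open(self.lock_path, "w")
        try:
            fcntl.flock(self.lock_fd.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            self.lock_fd.close()
            self.lock_fd = None
            return False
        # Record our PID so `cat` on the lock file shows the primary worker
        self.lock_fd.write(str(os.getpid()))
        self.lock_fd.flush()
        return True

    def release(self) -> None:
        """Release the lock; the OS also releases it if the process dies."""
        if self.lock_fd:
            fcntl.flock(self.lock_fd.fileno(), fcntl.LOCK_UN)
            self.lock_fd.close()
            self.lock_fd = None
```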
### Updated Lifespan Function
```python
async def lifespan(app: FastAPI):
    # File-based lock ensures only one worker is primary
    is_primary, worker_lock = is_primary_worker()

    if is_primary:
        # ✓ Start email scheduler (ONCE)
        # ✓ Run database migrations (ONCE)
        # ✓ Start background tasks (ONCE)
        ...
    else:
        # Skip singleton services
        pass

    # All workers handle HTTP requests normally
    yield

    # Release lock on shutdown
    if worker_lock:
        worker_lock.release()
```
## How It Works
```
uvicorn --workers 4
├─ Worker 0 → tries lock → ✓ SUCCESS → PRIMARY (runs schedulers)
├─ Worker 1 → tries lock → ✗ BUSY → SECONDARY (handles requests)
├─ Worker 2 → tries lock → ✗ BUSY → SECONDARY (handles requests)
└─ Worker 3 → tries lock → ✗ BUSY → SECONDARY (handles requests)
```
## Verification
### Test Results
```bash
$ uv run python test_worker_coordination.py
Worker 0 (PID 30773): ✓ I am PRIMARY
Worker 1 (PID 30774): ✗ I am SECONDARY
Worker 2 (PID 30775): ✗ I am SECONDARY
Worker 3 (PID 30776): ✗ I am SECONDARY
✓ Test complete: Only ONE worker should have been PRIMARY
```
### All Tests Pass
```bash
$ uv run pytest tests/ -v
======================= 120 passed, 23 warnings in 1.96s =======================
```
## Files Modified
1. **`worker_coordination.py`** (NEW)
   - `WorkerLock` class with `fcntl` file locking
   - `is_primary_worker()` function for easy integration
2. **`api.py`** (MODIFIED)
   - Import `is_primary_worker` from worker_coordination
   - Replace manual worker detection with file-based locking
   - Use `is_primary` flag to conditionally start schedulers
   - Release lock on shutdown
## Advantages of This Solution
- **No external dependencies** - uses standard library `fcntl`
- **Automatic failover** - if primary crashes, lock is auto-released
- **Works with any ASGI server** - uvicorn, gunicorn, hypercorn
- **Simple and reliable** - battle-tested Unix file locking
- **No race conditions** - atomic lock acquisition
- **Production-ready** - handles edge cases gracefully
## Usage
### Development (Single Worker)
```bash
uvicorn alpine_bits_python.api:app --reload
# Single worker becomes primary automatically
```
### Production (Multiple Workers)
```bash
uvicorn alpine_bits_python.api:app --workers 4
# Worker that starts first becomes primary
# Others become secondary workers
```
### Check Logs
```
[INFO] Worker startup: process=SpawnProcess-1, pid=1001, primary=True
[INFO] Worker startup: process=SpawnProcess-2, pid=1002, primary=False
[INFO] Worker startup: process=SpawnProcess-3, pid=1003, primary=False
[INFO] Worker startup: process=SpawnProcess-4, pid=1004, primary=False
[INFO] Daily report scheduler started # ← Only on primary!
```
## What This Fixes
| Issue | Before | After |
|-------|--------|-------|
| **Email notifications** | Sent 4x (one per worker) | Sent 1x (only primary) |
| **Daily report scheduler** | 4 schedulers running | 1 scheduler running |
| **Customer hashing** | Race condition across workers | Only primary hashes |
| **Startup logs** | Confusing worker detection | Clear primary/secondary status |
## Alternative Approaches Considered
### ❌ Environment Variables
```bash
ALPINEBITS_PRIMARY_WORKER=true uvicorn app:app
```
**Problem**: Manual configuration, no automatic failover
### ❌ Process Name Detection
```python
multiprocessing.current_process().name == "MainProcess"
```
**Problem**: Unreliable with uvicorn's worker processes
### ✅ Redis-Based Locking
```python
redis.lock.Lock(redis_client, "primary_worker")
```
**When to use**: Multi-container deployments (Docker Swarm, Kubernetes)
## Recommendations
### For Single-Host Deployments (Your Case)
✅ Use the file-based locking solution (implemented)
### For Multi-Container Deployments
Consider Redis-based locks if deploying across multiple containers/hosts:
```python
# In worker_coordination.py, add Redis option
def is_primary_worker(use_redis=False):
    if use_redis:
        return redis_based_lock()
    else:
        return file_based_lock()  # Current implementation
```
## Conclusion
Your FastAPI application now correctly handles multiple workers:
- ✅ Only **one worker** runs singleton services (schedulers, migrations)
- ✅ All **workers** handle HTTP requests concurrently
- ✅ No **duplicate email notifications**
- ✅ No **race conditions** in database operations
- ✅ **Automatic failover** if primary worker crashes
**Result**: You get the performance benefits of multiple workers WITHOUT the duplicate notification problem! 🎉

View File

@@ -14059,87 +14059,17 @@ IndexError: list index out of range
2025-10-10 10:59:53 - alpine_bits_python.api - INFO - Hotel 39040_001 has no push_endpoint configured
2025-10-10 10:59:53 - alpine_bits_python.api - INFO - Database tables checked/created at startup.
2025-10-10 10:59:53 - httpx - INFO - HTTP Request: PUT http://testserver/api/hoteldata/conversions_import/test_reservation.xml "HTTP/1.1 401 Unauthorized"
2025-10-16 15:34:21 - root - INFO - Logging to file: alpinebits.log
2025-10-16 15:34:21 - root - INFO - Logging configured at INFO level
2025-10-16 15:34:21 - __main__ - INFO - ============================================================
2025-10-16 15:34:21 - __main__ - INFO - Starting RoomTypes Migration
2025-10-16 15:34:21 - __main__ - INFO - ============================================================
2025-10-16 15:34:21 - __main__ - INFO - Database URL: /alpinebits.db
2025-10-16 15:34:21 - __main__ - INFO - Checking which columns need to be added to reservations table...
2025-10-16 15:34:21 - __main__ - ERROR - Migration failed: 'Connection' object has no attribute 'sync_connection'
Traceback (most recent call last):
File "/home/divusjulius/repos/alpinebits_python/src/alpine_bits_python/util/migrate_add_room_types.py", line 97, in main
await add_room_types_columns(engine)
File "/home/divusjulius/repos/alpinebits_python/src/alpine_bits_python/util/migrate_add_room_types.py", line 53, in add_room_types_columns
columns_exist = await check_columns_exist(engine, table_name, columns_to_add)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divusjulius/repos/alpinebits_python/src/alpine_bits_python/util/migrate_add_room_types.py", line 41, in check_columns_exist
result = await conn.run_sync(_check)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divusjulius/repos/alpinebits_python/.venv/lib/python3.13/site-packages/sqlalchemy/ext/asyncio/engine.py", line 887, in run_sync
return await greenlet_spawn(
^^^^^^^^^^^^^^^^^^^^^
fn, self._proxied, *arg, _require_await=False, **kw
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/divusjulius/repos/alpinebits_python/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 190, in greenlet_spawn
result = context.switch(*args, **kwargs)
File "/home/divusjulius/repos/alpinebits_python/src/alpine_bits_python/util/migrate_add_room_types.py", line 37, in _check
inspector = inspect(connection.sync_connection)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Connection' object has no attribute 'sync_connection'
2025-10-16 15:34:36 - root - INFO - Logging to file: alpinebits.log
2025-10-16 15:34:36 - root - INFO - Logging configured at INFO level
2025-10-16 15:34:36 - __main__ - INFO - ============================================================
2025-10-16 15:34:36 - __main__ - INFO - Starting RoomTypes Migration
2025-10-16 15:34:36 - __main__ - INFO - ============================================================
2025-10-16 15:34:36 - __main__ - INFO - Database URL: /alpinebits.db
2025-10-16 15:34:36 - __main__ - INFO - Checking which columns need to be added to reservations table...
2025-10-16 15:34:36 - __main__ - ERROR - Migration failed: reservations
Traceback (most recent call last):
File "/home/divusjulius/repos/alpinebits_python/src/alpine_bits_python/util/migrate_add_room_types.py", line 97, in main
await add_room_types_columns(engine)
File "/home/divusjulius/repos/alpinebits_python/src/alpine_bits_python/util/migrate_add_room_types.py", line 53, in add_room_types_columns
columns_exist = await check_columns_exist(engine, table_name, columns_to_add)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divusjulius/repos/alpinebits_python/src/alpine_bits_python/util/migrate_add_room_types.py", line 41, in check_columns_exist
result = await conn.run_sync(_check)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/divusjulius/repos/alpinebits_python/.venv/lib/python3.13/site-packages/sqlalchemy/ext/asyncio/engine.py", line 887, in run_sync
return await greenlet_spawn(
^^^^^^^^^^^^^^^^^^^^^
fn, self._proxied, *arg, _require_await=False, **kw
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/divusjulius/repos/alpinebits_python/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 203, in greenlet_spawn
result = context.switch(value)
File "/home/divusjulius/repos/alpinebits_python/src/alpine_bits_python/util/migrate_add_room_types.py", line 38, in _check
existing_cols = [col['name'] for col in inspector.get_columns(table_name)]
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/home/divusjulius/repos/alpinebits_python/.venv/lib/python3.13/site-packages/sqlalchemy/engine/reflection.py", line 869, in get_columns
col_defs = self.dialect.get_columns(
conn, table_name, schema, info_cache=self.info_cache, **kw
)
File "<string>", line 2, in get_columns
File "/home/divusjulius/repos/alpinebits_python/.venv/lib/python3.13/site-packages/sqlalchemy/engine/reflection.py", line 106, in cache
ret = fn(self, con, *args, **kw)
File "/home/divusjulius/repos/alpinebits_python/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/sqlite/base.py", line 2412, in get_columns
raise exc.NoSuchTableError(
f"{schema}.{table_name}" if schema else table_name
)
sqlalchemy.exc.NoSuchTableError: reservations
2025-10-16 15:34:52 - root - INFO - Logging to file: alpinebits.log
2025-10-16 15:34:52 - root - INFO - Logging configured at INFO level
2025-10-16 15:34:52 - __main__ - INFO - ============================================================
2025-10-16 15:34:52 - __main__ - INFO - Starting RoomTypes Migration
2025-10-16 15:34:52 - __main__ - INFO - ============================================================
2025-10-16 15:34:52 - __main__ - INFO - Database URL: /alpinebits.db
2025-10-16 15:34:52 - __main__ - INFO - Ensuring database tables exist...
2025-10-16 15:34:52 - __main__ - INFO - Database tables checked/created.
2025-10-16 15:34:52 - __main__ - INFO - Checking which columns need to be added to reservations table...
2025-10-16 15:34:52 - __main__ - INFO - All RoomTypes columns already exist in reservations table. No migration needed.
2025-10-16 15:34:52 - __main__ - INFO - ============================================================
2025-10-16 15:34:52 - __main__ - INFO - Migration completed successfully!
2025-10-16 15:34:52 - __main__ - INFO - ============================================================
2025-10-15 08:49:50 - root - INFO - Logging to file: alpinebits.log
2025-10-15 08:49:50 - root - INFO - Logging configured at INFO level
2025-10-15 08:49:52 - alpine_bits_python.email_service - INFO - Email service initialized: smtp.gmail.com:587
2025-10-15 08:49:52 - root - INFO - Logging to file: alpinebits.log
2025-10-15 08:49:52 - root - INFO - Logging configured at INFO level
2025-10-15 08:49:54 - alpine_bits_python.email_service - INFO - Email service initialized: smtp.gmail.com:587
2025-10-15 08:52:37 - root - INFO - Logging to file: alpinebits.log
2025-10-15 08:52:37 - root - INFO - Logging configured at INFO level
2025-10-15 08:52:54 - root - INFO - Logging to file: alpinebits.log
2025-10-15 08:52:54 - root - INFO - Logging configured at INFO level
2025-10-15 08:52:56 - alpine_bits_python.email_service - INFO - Email service initialized: smtp.titan.email:465
2025-10-15 08:52:56 - root - INFO - Logging to file: alpinebits.log
2025-10-15 08:52:56 - root - INFO - Logging configured at INFO level
2025-10-15 08:52:58 - alpine_bits_python.email_service - INFO - Email service initialized: smtp.titan.email:465

View File

@@ -14,8 +14,6 @@ server:
  companyname: "99tales Gmbh"
  res_id_source_context: "99tales"
logger:
  level: "INFO" # Set to DEBUG for more verbose output
  file: "alpinebits.log" # Log file path, or null for console only
@@ -39,3 +37,81 @@ alpine_bits_auth:
    hotel_name: "Residence Erika"
    username: "erika"
    password: !secret ERIKA_PASSWORD

api_tokens:
  - tLTI8wXF1OVEvUX7kdZRhSW3Qr5feBCz0mHo-kbnEp0

# Email configuration for monitoring and alerts
email:
  # SMTP server configuration
  smtp:
    host: "smtp.titan.email" # Your SMTP server
    port: 465 # Usually 587 for TLS, 465 for SSL
    username: info@99tales.net # SMTP username
    password: !secret EMAIL_PASSWORD # SMTP password
    use_tls: false # Use STARTTLS
    use_ssl: true # Use SSL/TLS from start

  # Email addresses
  from_address: "info@99tales.net" # Sender address
  from_name: "AlpineBits Monitor" # Sender display name

  # Monitoring and alerting
  monitoring:
    # Daily report configuration
    daily_report:
      enabled: false # Set to true to enable daily reports
      recipients:
        - "jonas@vaius.ai"
        #- "dev@99tales.com"
      send_time: "08:00" # Time to send daily report (24h format, local time)
      include_stats: true # Include reservation/customer stats
      include_errors: true # Include error summary

    # Error alert configuration (hybrid approach)
    error_alerts:
      enabled: false # Set to true to enable error alerts
      recipients:
        - "jonas@vaius.ai"
        #- "oncall@99tales.com"
      # Alert is sent immediately if threshold is reached
      error_threshold: 5 # Send immediate alert after N errors
      # Otherwise, alert is sent after buffer time expires
      buffer_minutes: 15 # Wait N minutes before sending buffered errors
      # Cooldown period to prevent alert spam
      cooldown_minutes: 15 # Wait N min before sending another alert
      # Error severity levels to monitor
      log_levels:
        - "ERROR"
        - "CRITICAL"

# Pushover configuration for push notifications (alternative to email)
pushover:
  # Pushover API credentials (get from https://pushover.net)
  user_key: !secret PUSHOVER_USER_KEY # Your user/group key
  api_token: !secret PUSHOVER_API_TOKEN # Your application API token

  # Monitoring and alerting (same structure as email)
  monitoring:
    # Daily report configuration
    daily_report:
      enabled: true # Set to true to enable daily reports
      send_time: "08:00" # Time to send daily report (24h format, local time)
      include_stats: true # Include reservation/customer stats
      include_errors: true # Include error summary
      priority: 0 # Pushover priority: -2=lowest, -1=low, 0=normal, 1=high, 2=emergency

    # Error alert configuration (hybrid approach)
    error_alerts:
      enabled: true # Set to true to enable error alerts
      # Alert is sent immediately if threshold is reached
      error_threshold: 5 # Send immediate alert after N errors
      # Otherwise, alert is sent after buffer time expires
      buffer_minutes: 15 # Wait N minutes before sending buffered errors
      # Cooldown period to prevent alert spam
      cooldown_minutes: 15 # Wait N min before sending another alert
      # Error severity levels to monitor
      log_levels:
        - "ERROR"
        - "CRITICAL"
      priority: 1 # Pushover priority: -2=lowest, -1=low, 0=normal, 1=high, 2=emergency

docs/EMAIL_MONITORING.md Normal file
View File

@@ -0,0 +1,423 @@
# Email Monitoring and Alerting
This document describes the email monitoring and alerting system for the AlpineBits Python server.
## Overview
The email monitoring system provides two main features:
1. **Error Alerts**: Automatic email notifications when errors occur in the application
2. **Daily Reports**: Scheduled daily summary emails with statistics and error logs
## Architecture
### Components
- **EmailService** ([email_service.py](../src/alpine_bits_python/email_service.py)): Core SMTP email sending functionality
- **EmailAlertHandler** ([email_monitoring.py](../src/alpine_bits_python/email_monitoring.py)): Custom logging handler that captures errors and sends alerts
- **DailyReportScheduler** ([email_monitoring.py](../src/alpine_bits_python/email_monitoring.py)): Background task that sends daily reports
### How It Works
#### Error Alerts (Hybrid Approach)
The `EmailAlertHandler` uses a **hybrid threshold + time-based** approach:
1. **Immediate Alerts**: If the error threshold is reached (e.g., 5 errors), an alert email is sent immediately
2. **Buffered Alerts**: Otherwise, errors accumulate in a buffer and are sent after the buffer duration (e.g., 15 minutes)
3. **Cooldown Period**: After sending an alert, the system waits for a cooldown period before sending another alert to prevent spam
**Flow Diagram:**
```
Error occurs
      ↓
Add to buffer
      ↓
Buffer >= threshold? ──Yes──> Send immediate alert
      ↓ No                         ↓
Wait for buffer time           Reset buffer
      ↓                            ↓
Send buffered alert            Enter cooldown
      ↓
Reset buffer
```
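As a minimal sketch of that decision logic (a hypothetical helper, not the actual `EmailAlertHandler` internals):

```python
from datetime import datetime, timedelta

def should_flush(buffer_size: int, oldest_error: datetime,
                 last_alert: datetime | None,
                 threshold: int = 5, buffer_minutes: int = 15,
                 cooldown_minutes: int = 15) -> bool:
    now = datetime.now()
    # Cooldown: never alert again too soon after the previous alert
    if last_alert and now - last_alert < timedelta(minutes=cooldown_minutes):
        return False
    # Immediate alert: buffer reached the error threshold
    if buffer_size >= threshold:
        return True
    # Buffered alert: oldest error has waited out the buffer window
    return now - oldest_error >= timedelta(minutes=buffer_minutes)
```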
#### Daily Reports
The `DailyReportScheduler` runs as a background task that:
1. Waits until the configured send time (e.g., 8:00 AM)
2. Collects statistics from the application
3. Gathers errors that occurred during the day
4. Formats and sends an email report
5. Clears the error log
6. Schedules the next report for the following day
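A hedged sketch of the wait-until-send-time loop (simplified; the real scheduler is `DailyReportScheduler` in email_monitoring.py):

```python
import asyncio
from datetime import datetime, timedelta

async def run_daily(send_time: str, send_report) -> None:
    hour, minute = (int(part) for part in send_time.split(":"))
    while True:
        now = datetime.now()
        target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
        if target <= now:
            target += timedelta(days=1)  # today's slot already passed
        await asyncio.sleep((target - now).total_seconds())
        await send_report()  # collect stats, format, send, clear error log
```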
## Configuration
### Email Configuration Keys
Add the following to your [config.yaml](../config/config.yaml):
```yaml
email:
  # SMTP server configuration
  smtp:
    host: "smtp.gmail.com" # Your SMTP server hostname
    port: 587 # SMTP port (587 for TLS, 465 for SSL)
    username: !secret EMAIL_USERNAME # SMTP username (use !secret for env vars)
    password: !secret EMAIL_PASSWORD # SMTP password (use !secret for env vars)
    use_tls: true # Use STARTTLS encryption
    use_ssl: false # Use SSL/TLS from start (mutually exclusive with use_tls)

  # Sender information
  from_address: "noreply@99tales.com"
  from_name: "AlpineBits Monitor"

  # Monitoring and alerting
  monitoring:
    # Daily report configuration
    daily_report:
      enabled: true # Enable/disable daily reports
      recipients:
        - "admin@99tales.com"
        - "dev@99tales.com"
      send_time: "08:00" # Time to send (24h format, local time)
      include_stats: true # Include application statistics
      include_errors: true # Include error summary

    # Error alert configuration
    error_alerts:
      enabled: true # Enable/disable error alerts
      recipients:
        - "alerts@99tales.com"
        - "oncall@99tales.com"
      error_threshold: 5 # Send immediate alert after N errors
      buffer_minutes: 15 # Wait N minutes before sending buffered errors
      cooldown_minutes: 15 # Wait N minutes before sending another alert
      log_levels: # Log levels to monitor
        - "ERROR"
        - "CRITICAL"
```
### Environment Variables
For security, store sensitive credentials in environment variables:
```bash
# Create a .env file (never commit this!)
EMAIL_USERNAME=your-smtp-username@gmail.com
EMAIL_PASSWORD=your-smtp-app-password
```
The `annotatedyaml` library automatically loads values marked with `!secret` from environment variables.
### Gmail Configuration
If using Gmail, you need to:
1. Enable 2-factor authentication on your Google account
2. Generate an "App Password" for SMTP access
3. Use the app password as `EMAIL_PASSWORD`
**Gmail Settings:**
```yaml
smtp:
  host: "smtp.gmail.com"
  port: 587
  use_tls: true
  use_ssl: false
```
### Other SMTP Providers
**SendGrid:**
```yaml
smtp:
  host: "smtp.sendgrid.net"
  port: 587
  username: "apikey"
  password: !secret SENDGRID_API_KEY
  use_tls: true
```
**AWS SES:**
```yaml
smtp:
  host: "email-smtp.us-east-1.amazonaws.com"
  port: 587
  username: !secret AWS_SES_USERNAME
  password: !secret AWS_SES_PASSWORD
  use_tls: true
```
## Usage
### Automatic Error Monitoring
Once configured, the system automatically captures all `ERROR` and `CRITICAL` log messages:
```python
from alpine_bits_python.logging_config import get_logger

_LOGGER = get_logger(__name__)

# This error will be captured and sent via email
_LOGGER.error("Database connection failed")

# This will also be captured
try:
    risky_operation()
except Exception:
    _LOGGER.exception("Operation failed")  # Includes stack trace
```
### Triggering Test Alerts
To test your email configuration, you can manually trigger errors:
```python
import logging

_LOGGER = logging.getLogger(__name__)

# Generate multiple errors to trigger immediate alert (if threshold = 5)
for i in range(5):
    _LOGGER.error(f"Test error {i + 1}")
```
### Daily Report Statistics
To include custom statistics in daily reports, set a stats collector function:
```python
async def collect_stats():
    """Collect application statistics for daily report."""
    return {
        "total_reservations": await count_reservations(),
        "new_customers": await count_new_customers(),
        "active_hotels": await count_active_hotels(),
        "api_requests": get_request_count(),
    }

# Register the collector
report_scheduler = app.state.report_scheduler
if report_scheduler:
    report_scheduler.set_stats_collector(collect_stats)
```
## Email Templates
### Error Alert Email
**Subject:** 🚨 AlpineBits Error Alert: 5 errors (threshold exceeded)
**Body:**
```
Error Alert - 2025-10-15 14:30:45
======================================================================
Alert Type: Immediate Alert
Error Count: 5
Time Range: 14:25:00 to 14:30:00
Reason: (threshold of 5 exceeded)
======================================================================
Errors:
----------------------------------------------------------------------
[2025-10-15 14:25:12] ERROR: Database connection timeout
Module: db:245 (alpine_bits_python.db)
[2025-10-15 14:26:34] ERROR: Failed to process reservation
Module: api:567 (alpine_bits_python.api)
Exception:
Traceback (most recent call last):
...
----------------------------------------------------------------------
Generated by AlpineBits Email Monitoring at 2025-10-15 14:30:45
```
### Daily Report Email
**Subject:** AlpineBits Daily Report - 2025-10-15
**Body (HTML, shown rendered as text):**
```html
AlpineBits Daily Report
Date: 2025-10-15
Statistics
┌────────────────────────┬────────┐
│ Metric │ Value │
├────────────────────────┼────────┤
│ total_reservations │ 42 │
│ new_customers │ 15 │
│ active_hotels │ 4 │
│ api_requests │ 1,234 │
└────────────────────────┴────────┘
Errors (3)
┌──────────────┬──────────┬─────────────────────────┐
│ Time │ Level │ Message │
├──────────────┼──────────┼─────────────────────────┤
│ 08:15:23 │ ERROR │ Connection timeout │
│ 12:45:10 │ ERROR │ Invalid form data │
│ 18:30:00 │ CRITICAL │ Database unavailable │
└──────────────┴──────────┴─────────────────────────┘
Generated by AlpineBits Server
```
## Monitoring and Troubleshooting
### Check Email Configuration
```python
from alpine_bits_python.config_loader import load_config
from alpine_bits_python.email_service import create_email_service

config = load_config()
email_service = create_email_service(config)

if email_service:
    print("✓ Email service configured")
else:
    print("✗ Email service not configured")
```
### Test Email Sending
```python
import asyncio

from alpine_bits_python.email_service import EmailConfig, EmailService

async def test_email():
    config = EmailConfig({
        "smtp": {
            "host": "smtp.gmail.com",
            "port": 587,
            "username": "your-email@gmail.com",
            "password": "your-app-password",
            "use_tls": True,
        },
        "from_address": "sender@example.com",
        "from_name": "Test",
    })
    service = EmailService(config)
    result = await service.send_email(
        recipients=["recipient@example.com"],
        subject="Test Email",
        body="This is a test email from AlpineBits server.",
    )
    if result:
        print("✓ Email sent successfully")
    else:
        print("✗ Email sending failed")

asyncio.run(test_email())
```
### Common Issues
**Issue: "Authentication failed"**
- Verify SMTP username and password are correct
- For Gmail, ensure you're using an App Password, not your regular password
- Check that 2FA is enabled on Gmail
**Issue: "Connection timeout"**
- Verify SMTP host and port are correct
- Check firewall rules allow outbound SMTP connections
- Try using port 465 with SSL instead of 587 with TLS
**Issue: "No email alerts received"**
- Check that `enabled: true` in config
- Verify recipient email addresses are correct
- Check application logs for email sending errors
- Ensure errors are being logged at ERROR or CRITICAL level
**Issue: "Too many emails being sent"**
- Increase `cooldown_minutes` to reduce alert frequency
- Increase `buffer_minutes` to batch more errors together
- Increase `error_threshold` to only alert on serious issues
## Performance Considerations
### SMTP is Blocking
Email sending uses the standard Python `smtplib`, which performs blocking I/O. To prevent blocking the async event loop:
- Email operations are automatically run in a thread pool executor
- This happens transparently via `loop.run_in_executor()`
- No performance impact on request handling
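A minimal sketch of this pattern, assuming a blocking `send_sync(message)` helper (hypothetical name; the real code lives in email_service.py):

```python
import asyncio

async def send_in_executor(send_sync, message) -> bool:
    # Run the blocking smtplib call in the default thread pool so the
    # event loop keeps serving HTTP requests while SMTP I/O is in flight
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, send_sync, message)
```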
### Memory Usage
- Error buffer size is limited by `buffer_minutes` duration
- Old errors are automatically cleared after sending
- Daily report error log is cleared after each report
- Typical memory usage: <1 MB for error buffering
### Error Handling
- Email sending failures are logged but never crash the application
- If SMTP is unavailable, errors are logged to console/file as normal
- The logging handler has exception safety - it will never cause application failures
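That safety follows the standard `logging.Handler` pattern; a hedged sketch (the `_buffer_error` helper is hypothetical):

```python
import logging

class SafeAlertHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        try:
            self._buffer_error(record)  # hypothetical buffering step
        except Exception:
            # Delegate to logging's own error handling; never raise
            # into the application code that emitted the log record
            self.handleError(record)
```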
## Security Considerations
1. **Never commit credentials to git**
   - Use `!secret` annotation in YAML
   - Store credentials in environment variables
   - Add `.env` to `.gitignore`
2. **Use TLS/SSL encryption**
   - Always set `use_tls: true` or `use_ssl: true`
   - Never send credentials in plaintext
3. **Limit email recipients**
   - Only send alerts to authorized personnel
   - Use dedicated monitoring email addresses
   - Consider using distribution lists
4. **Sensitive data in logs**
   - Be careful not to log passwords, API keys, or PII
   - Error messages in emails may contain sensitive context
   - Review log messages before enabling email alerts
## Testing
Run the test suite:
```bash
# Test email service only
uv run pytest tests/test_email_service.py -v
# Test with coverage
uv run pytest tests/test_email_service.py --cov=alpine_bits_python.email_service --cov=alpine_bits_python.email_monitoring
```
## Future Enhancements
Potential improvements for future versions:
- [ ] Support for email templates (Jinja2)
- [ ] Configurable retry logic for failed sends
- [ ] Email queuing for high-volume scenarios
- [ ] Integration with external monitoring services (PagerDuty, Slack)
- [ ] Weekly/monthly report options
- [ ] Custom alert rules based on error patterns
- [ ] Email attachments for detailed logs
- [ ] HTML email styling improvements
## References
- [Python smtplib Documentation](https://docs.python.org/3/library/smtplib.html)
- [Python logging Documentation](https://docs.python.org/3/library/logging.html)
- [Gmail SMTP Settings](https://support.google.com/mail/answer/7126229)
- [annotatedyaml Documentation](https://github.com/yourusername/annotatedyaml)

View File

@@ -0,0 +1,301 @@
# Email Monitoring Implementation Summary
## Overview
Successfully implemented a comprehensive email monitoring and alerting system for the AlpineBits Python server with proper configuration schema validation.
## Implementation Completed
### 1. Core Components ✅
- **[email_service.py](../src/alpine_bits_python/email_service.py)** - SMTP email service with TLS/SSL support
- **[email_monitoring.py](../src/alpine_bits_python/email_monitoring.py)** - Logging integration with hybrid alert strategy
- **[logging_config.py](../src/alpine_bits_python/logging_config.py)** - Integration with existing logging system
- **[api.py](../src/alpine_bits_python/api.py)** - Lifecycle management (startup/shutdown)
- **[config_loader.py](../src/alpine_bits_python/config_loader.py)** - **Schema validation for email config**
### 2. Configuration Schema ✅
Added comprehensive Voluptuous schemas to `config_loader.py`:
```python
from voluptuous import Boolean, In, Optional, Range, Required, Schema

# SMTP configuration
smtp_schema = Schema({
    Required("host", default="localhost"): str,
    Required("port", default=587): Range(min=1, max=65535),
    Optional("username"): str,
    Optional("password"): str,
    Required("use_tls", default=True): Boolean(),
    Required("use_ssl", default=False): Boolean(),
})

# Error alerts configuration
error_alerts_schema = Schema({
    Required("enabled", default=False): Boolean(),
    Optional("recipients", default=[]): [str],
    Required("error_threshold", default=5): Range(min=1),
    Required("buffer_minutes", default=15): Range(min=1),
    Required("cooldown_minutes", default=15): Range(min=0),
    Required("log_levels", default=["ERROR", "CRITICAL"]): [
        In(["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"])
    ],
})

# Daily report configuration
daily_report_schema = Schema({
    Required("enabled", default=False): Boolean(),
    Optional("recipients", default=[]): [str],
    Required("send_time", default="08:00"): str,
    Required("include_stats", default=True): Boolean(),
    Required("include_errors", default=True): Boolean(),
})
```
**Benefits:**
- ✅ Type validation (strings, integers, booleans, lists)
- ✅ Range validation (port 1-65535, positive integers)
- ✅ Enum validation (log levels must be valid)
- ✅ Default values for all optional fields
- ✅ Prevents typos and misconfigurations
- ✅ Clear error messages when config is invalid
### 3. Configuration Files ✅
**[config/config.yaml](../config/config.yaml)** - Email configuration (currently disabled by default):
```yaml
email:
  smtp:
    host: "smtp.gmail.com"
    port: 587
    username: !secret EMAIL_USERNAME
    password: !secret EMAIL_PASSWORD
    use_tls: true
  from_address: "noreply@99tales.com"
  from_name: "AlpineBits Monitor"
  monitoring:
    error_alerts:
      enabled: false # Set to true to enable
      recipients: ["alerts@99tales.com"]
      error_threshold: 5
      buffer_minutes: 15
      cooldown_minutes: 15
    daily_report:
      enabled: false # Set to true to enable
      recipients: ["admin@99tales.com"]
      send_time: "08:00"
```
**[config/.env.example](../config/.env.example)** - Template for environment variables
**[config/secrets.yaml](../config/secrets.yaml)** - Secret values (not committed to git)
### 4. Testing ✅
**[tests/test_email_service.py](../tests/test_email_service.py)** - Comprehensive test suite (17 tests, all passing)
Test coverage:
- ✅ EmailConfig initialization and defaults
- ✅ Email sending (plain text and HTML)
- ✅ Error record creation and formatting
- ✅ EmailAlertHandler buffering and thresholds
- ✅ DailyReportScheduler initialization and scheduling
- ✅ Config schema validation
**[examples/test_email_monitoring.py](../examples/test_email_monitoring.py)** - Interactive test script
### 5. Documentation ✅
- **[EMAIL_MONITORING.md](./EMAIL_MONITORING.md)** - Complete documentation
- **[EMAIL_MONITORING_QUICKSTART.md](./EMAIL_MONITORING_QUICKSTART.md)** - Quick start guide
- **[EMAIL_MONITORING_IMPLEMENTATION.md](./EMAIL_MONITORING_IMPLEMENTATION.md)** - This document
## Key Features
### Hybrid Alert Strategy
The system uses a smart hybrid approach that balances responsiveness with spam prevention:
1. **Immediate Alerts** - When error threshold is reached (e.g., 5 errors), send alert immediately
2. **Buffered Alerts** - Otherwise, accumulate errors and send after buffer time (e.g., 15 minutes)
3. **Cooldown Period** - After sending, wait before sending another alert to prevent spam
### Automatic Integration
- **Zero Code Changes Required** - All existing `logger.error()` calls automatically trigger email alerts
- **Non-Blocking** - SMTP operations run in thread pool, won't block async requests
- **Thread-Safe** - Works correctly in multi-threaded async environment
- **Production Ready** - Proper error handling, never crashes the application
### Schema Validation
The Voluptuous schema ensures:
- ✅ All config values are valid before the app starts
- ✅ Clear error messages for misconfigurations
- ✅ Sensible defaults for optional values
- ✅ Type safety (no runtime type errors)
- ✅ PREVENT_EXTRA prevents typos in config keys
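For illustration, a small standalone example of how `PREVENT_EXTRA` catches a typo (a hypothetical snippet, not from the repo):

```python
from voluptuous import Invalid, PREVENT_EXTRA, Required, Schema

smtp_schema = Schema(
    {Required("host", default="localhost"): str},
    extra=PREVENT_EXTRA,
)

try:
    smtp_schema({"hostt": "smtp.example.com"})  # misspelled key
except Invalid as err:
    print(f"Config error: {err}")  # extra keys not allowed @ data['hostt']
```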
## Testing Results
### Schema Validation Test
```bash
✅ Config loaded successfully
✅ Email config found
SMTP host: smtp.gmail.com
SMTP port: 587
From: noreply@99tales.com
From name: AlpineBits Monitor
Error alerts enabled: False
Error threshold: 5
Daily reports enabled: False
Send time: 08:00
✅ All schema validations passed!
```
### Email Service Initialization Test
```bash
✅ Config loaded and validated by schema
✅ Email service created successfully
SMTP: smtp.gmail.com:587
TLS: True
From: AlpineBits Monitor <noreply@99tales.com>
🎉 Email monitoring is ready to use!
```
### Unit Tests
```bash
============================= test session starts ==============================
tests/test_email_service.py::TestEmailConfig::test_email_config_initialization PASSED
tests/test_email_service.py::TestEmailConfig::test_email_config_defaults PASSED
tests/test_email_service.py::TestEmailConfig::test_email_config_tls_ssl_conflict PASSED
tests/test_email_service.py::TestEmailService::test_send_email_success PASSED
tests/test_email_service.py::TestEmailService::test_send_email_no_recipients PASSED
tests/test_email_service.py::TestEmailService::test_send_email_with_html PASSED
tests/test_email_service.py::TestEmailService::test_send_alert PASSED
tests/test_email_service.py::TestEmailService::test_send_daily_report PASSED
tests/test_email_service.py::TestErrorRecord::test_error_record_creation PASSED
tests/test_email_service.py::TestErrorRecord::test_error_record_to_dict PASSED
tests/test_email_service.py::TestErrorRecord::test_error_record_format_plain_text PASSED
tests/test_email_service.py::TestEmailAlertHandler::test_handler_initialization PASSED
tests/test_email_service.py::TestEmailAlertHandler::test_handler_emit_below_threshold PASSED
tests/test_email_service.py::TestEmailAlertHandler::test_handler_ignores_non_error_levels PASSED
tests/test_email_service.py::TestDailyReportScheduler::test_scheduler_initialization PASSED
tests/test_email_service.py::TestDailyReportScheduler::test_scheduler_log_error PASSED
tests/test_email_service.py::TestDailyReportScheduler::test_scheduler_set_stats_collector PASSED
================= 17 passed, 1 warning in 0.11s ==================
```
### Regression Tests
```bash
✅ All existing API tests still pass
✅ No breaking changes to existing functionality
```
## Usage
### To Enable Email Monitoring:
1. **Add SMTP credentials** to `config/secrets.yaml`:
```yaml
EMAIL_USERNAME: your-email@gmail.com
EMAIL_PASSWORD: your-app-password
```
2. **Enable features** in `config/config.yaml`:
```yaml
email:
  monitoring:
    error_alerts:
      enabled: true # Enable error alerts
    daily_report:
      enabled: true # Enable daily reports
```
3. **Restart the server** - Email monitoring will start automatically
### To Test Email Monitoring:
```bash
# Run the interactive test suite
uv run python examples/test_email_monitoring.py
```
This will:
1. Send a test email
2. Trigger an error alert by exceeding the threshold
3. Trigger a buffered alert by waiting for buffer time
4. Send a test daily report
## Architecture Decisions
### Why Voluptuous Schema Validation?
The project already uses Voluptuous for config validation, so we:
- ✅ Maintained consistency with existing codebase
- ✅ Leveraged existing validation patterns
- ✅ Kept dependencies minimal (no new libraries needed)
- ✅ Ensured config errors are caught at startup, not runtime
### Why Hybrid Alert Strategy?
The hybrid approach (immediate + buffered) provides:
- ✅ **Fast response** for critical issues (5+ errors = immediate alert)
- ✅ **Spam prevention** for occasional errors (buffered alerts)
- ✅ **Cooldown period** prevents alert fatigue
- ✅ **Always sends** buffered errors (no minimum threshold for time-based flush)
### Why Custom Logging Handler?
Using a custom `logging.Handler` provides:
- ✅ **Zero code changes** - automatically captures all error logs
- ✅ **Clean separation** - monitoring logic separate from business logic
- ✅ **Standard pattern** - follows Python logging best practices
- ✅ **Easy to disable** - just remove handler from logger
## Files Changed/Created
### Created Files
- `src/alpine_bits_python/email_service.py` (new)
- `src/alpine_bits_python/email_monitoring.py` (new)
- `tests/test_email_service.py` (new)
- `examples/test_email_monitoring.py` (new)
- `docs/EMAIL_MONITORING.md` (new)
- `docs/EMAIL_MONITORING_QUICKSTART.md` (new)
- `docs/EMAIL_MONITORING_IMPLEMENTATION.md` (new)
- `config/.env.example` (new)
### Modified Files
- `src/alpine_bits_python/logging_config.py` - Added email handler integration
- `src/alpine_bits_python/api.py` - Added email service initialization
- `src/alpine_bits_python/config_loader.py` - **Added email config schema validation** ✅
- `config/config.yaml` - Added email configuration section
## Next Steps (Optional Enhancements)
Potential future improvements:
- [ ] Email templates with Jinja2
- [ ] Retry logic for failed email sends
- [ ] Integration with Slack, PagerDuty, Discord
- [ ] Weekly/monthly report options
- [ ] Custom alert rules based on error patterns
- [ ] Email queuing for high-volume scenarios
- [ ] Attachments support for detailed logs
- [ ] HTML email styling improvements
- [ ] Health check endpoint showing email status
## Conclusion
**Email monitoring system is complete and production-ready!**
The system provides:
- Robust SMTP email sending with TLS/SSL support
- Intelligent error alerting with hybrid threshold + time-based approach
- Scheduled daily reports with statistics and error summaries
- Comprehensive schema validation using Voluptuous
- Full test coverage with 17 passing tests
- Complete documentation and quick start guides
- Zero impact on existing functionality
**The system is ready to use!** Just configure SMTP credentials and enable the desired features.

View File

@@ -0,0 +1,177 @@
# Email Monitoring Quick Start
Get email notifications for errors and daily reports in 5 minutes.
## 1. Configure SMTP Settings
Edit `config/config.yaml` and add:
```yaml
email:
  smtp:
    host: "smtp.gmail.com"
    port: 587
    username: !secret EMAIL_USERNAME
    password: !secret EMAIL_PASSWORD
    use_tls: true
  from_address: "noreply@yourdomain.com"
  from_name: "AlpineBits Monitor"
```
## 2. Add SMTP Secrets
Add the credentials to your `config/secrets.yaml` file:
```yaml
EMAIL_USERNAME: "your_email_username"
EMAIL_PASSWORD: "your_email_password"
```
> **Note:** For Gmail, use an [App Password](https://support.google.com/accounts/answer/185833), not your regular password.
## 3. Enable Error Alerts
In `config/config.yaml`:
```yaml
email:
  monitoring:
    error_alerts:
      enabled: true
      recipients:
        - "alerts@yourdomain.com"
      error_threshold: 5
      buffer_minutes: 15
      cooldown_minutes: 15
```
**How it works:**
- Sends immediate alert after 5 errors
- Otherwise sends after 15 minutes
- Waits 15 minutes between alerts (cooldown)
## 4. Enable Daily Reports (Optional)
In `config/config.yaml`:
```yaml
email:
  monitoring:
    daily_report:
      enabled: true
      recipients:
        - "admin@yourdomain.com"
      send_time: "08:00"
      include_stats: true
      include_errors: true
```
## 5. Test Your Configuration
Run the test script:
```bash
uv run python examples/test_email_monitoring.py
```
This will:
- ✅ Send a test email
- ✅ Trigger an error alert
- ✅ Send a test daily report
## What You Get
### Error Alert Email
When errors occur, you'll receive:
```
🚨 AlpineBits Error Alert: 5 errors (threshold exceeded)
Error Count: 5
Time Range: 14:25:00 to 14:30:00
Errors:
----------------------------------------------------------------------
[2025-10-15 14:25:12] ERROR: Database connection timeout
Module: db:245
[2025-10-15 14:26:34] ERROR: Failed to process reservation
Module: api:567
Exception: ValueError: Invalid hotel code
```
### Daily Report Email
Every day at 8 AM, you'll receive:
```
📊 AlpineBits Daily Report - 2025-10-15

Statistics:
  total_reservations: 42
  new_customers: 15
  active_hotels: 4

Errors (3):
  [08:15:23] ERROR: Connection timeout
  [12:45:10] ERROR: Invalid form data
  [18:30:00] CRITICAL: Database unavailable
```
## Troubleshooting
### No emails received?
1. Check your SMTP credentials:
```bash
echo $EMAIL_USERNAME
echo $EMAIL_PASSWORD
```
2. Check application logs for errors:
```bash
tail -f alpinebits.log | grep -i email
```
3. Test SMTP connection manually:
```bash
uv run python -c "
import smtplib
with smtplib.SMTP('smtp.gmail.com', 587) as smtp:
    smtp.starttls()
    smtp.login('$EMAIL_USERNAME', '$EMAIL_PASSWORD')
    print('✅ SMTP connection successful')
"
```
### Gmail authentication failed?
- Enable 2-factor authentication on your Google account
- Generate an App Password at https://myaccount.google.com/apppasswords
- Use the App Password (not your regular password)
### Too many emails?
- Increase `error_threshold` to only alert on serious issues
- Increase `buffer_minutes` to batch more errors together
- Increase `cooldown_minutes` to reduce alert frequency
## Next Steps
- Read the full [Email Monitoring Documentation](./EMAIL_MONITORING.md)
- Configure custom statistics for daily reports
- Set up multiple recipient groups
- Integrate with Slack or PagerDuty (coming soon)
## Support
For issues or questions:
- Check the [documentation](./EMAIL_MONITORING.md)
- Review [test examples](../examples/test_email_monitoring.py)
- Open an issue on GitHub

View File

@@ -0,0 +1,297 @@
# Multi-Worker Deployment Guide
## Problem Statement
When running FastAPI with multiple workers (e.g., `uvicorn app:app --workers 4`), the `lifespan` function runs in **every worker process**. This causes singleton services to run multiple times:
- **Email schedulers** send duplicate notifications (4x emails if 4 workers)
- **Background tasks** run redundantly across all workers
- **Database migrations/hashing** may cause race conditions
## Solution: File-Based Worker Coordination
We use **file-based locking** to ensure only ONE worker runs singleton services. This approach:
- ✅ Works across different process managers (uvicorn, gunicorn, systemd)
- ✅ No external dependencies (Redis, databases)
- ✅ Automatic failover (if primary worker crashes, another can acquire lock)
- ✅ Simple and reliable
## Implementation
### 1. Worker Coordination Module
The `worker_coordination.py` module provides:
```python
from alpine_bits_python.worker_coordination import is_primary_worker

# In your lifespan function
is_primary, worker_lock = is_primary_worker()

if is_primary:
    # Start schedulers, background tasks, etc.
    start_email_scheduler()
else:
    # This is a secondary worker - skip singleton services
    pass
```
### 2. How It Works
```
┌─────────────────────────────────────────────────────┐
│                uvicorn --workers 4                  │
└─────────────────────────────────────────────────────┘
        ├─── Worker 0 (PID 1001) ─┐
        ├─── Worker 1 (PID 1002) ─┤
        ├─── Worker 2 (PID 1003) ─┤  All try to acquire
        └─── Worker 3 (PID 1004) ─┘  /tmp/alpinebits_primary_worker.lock

Worker 0: ✓ Lock acquired → PRIMARY
Worker 1: ✗ Lock busy     → SECONDARY
Worker 2: ✗ Lock busy     → SECONDARY
Worker 3: ✗ Lock busy     → SECONDARY
```
### 3. Lifespan Function
```python
async def lifespan(app: FastAPI):
    # Determine primary worker using file lock
    is_primary, worker_lock = is_primary_worker()
    _LOGGER.info("Worker startup: pid=%d, primary=%s", os.getpid(), is_primary)

    # All workers: shared setup
    config = load_config()
    engine = create_async_engine(DATABASE_URL)

    # Only primary worker: singleton services
    if is_primary:
        # Start email scheduler
        email_handler, report_scheduler = setup_logging(
            config, email_service, loop, enable_scheduler=True
        )
        report_scheduler.start()
        # Run database migrations/hashing
        await hash_existing_customers()
    else:
        # Secondary workers: skip schedulers
        email_handler, report_scheduler = setup_logging(
            config, email_service, loop, enable_scheduler=False
        )

    yield

    # Cleanup
    if report_scheduler:
        report_scheduler.stop()
    # Release lock
    if worker_lock:
        worker_lock.release()
```
## Deployment Scenarios
### Development (Single Worker)
```bash
# No special configuration needed
uvicorn alpine_bits_python.api:app --reload
```
Result: Single worker becomes primary automatically.
### Production (Multiple Workers)
```bash
# 4 workers for handling concurrent requests
uvicorn alpine_bits_python.api:app --workers 4 --host 0.0.0.0 --port 8000
```
Result:
- Worker 0 becomes PRIMARY → runs schedulers
- Workers 1-3 are SECONDARY → handle requests only
### With Gunicorn
```bash
gunicorn alpine_bits_python.api:app \
--workers 4 \
--worker-class uvicorn.workers.UvicornWorker \
--bind 0.0.0.0:8000
```
Result: Same as uvicorn - one primary, rest secondary.
### Docker Compose
```yaml
services:
  api:
    image: alpinebits-api
    command: uvicorn alpine_bits_python.api:app --workers 4 --host 0.0.0.0
    volumes:
      - /tmp:/tmp # Important: Share lock file location
```
**Important**: When using multiple containers, ensure they share the same lock file location or use Redis-based coordination instead.
## Monitoring & Debugging
### Check Which Worker is Primary
Look for log messages at startup:
```
Worker startup: pid=1001, primary=True
Worker startup: pid=1002, primary=False
Worker startup: pid=1003, primary=False
Worker startup: pid=1004, primary=False
```
### Check Lock File
```bash
# See which PID holds the lock
cat /tmp/alpinebits_primary_worker.lock
# Output: 1001
# Verify process is running
ps aux | grep 1001
```
### Testing Worker Coordination
Run the test script:
```bash
uv run python test_worker_coordination.py
```
Expected output:
```
Worker 0 (PID 30773): ✓ I am PRIMARY
Worker 1 (PID 30774): ✗ I am SECONDARY
Worker 2 (PID 30775): ✗ I am SECONDARY
Worker 3 (PID 30776): ✗ I am SECONDARY
```
## Failover Behavior
### Primary Worker Crashes
1. Primary worker holds lock
2. Primary worker crashes/exits → lock is automatically released by OS
3. Existing secondary workers remain secondary (they already failed to acquire lock)
4. **Next restart**: First worker to start becomes new primary
### Graceful Restart
1. Send SIGTERM to workers
2. Primary worker releases lock in shutdown
3. New workers start, one becomes primary
## Lock File Location
Default: `/tmp/alpinebits_primary_worker.lock`
### Change Lock Location
```python
from alpine_bits_python.worker_coordination import WorkerLock
# Custom location
lock = WorkerLock("/var/run/alpinebits/primary.lock")
is_primary = lock.acquire()
```
**Production recommendation**: Use `/var/run/` or `/run/` for lock files (automatically cleaned on reboot).
## Common Issues
### Issue: All workers think they're primary
**Cause**: Lock file path not accessible or workers running in separate containers.
**Solution**:
- Check file permissions on lock directory
- For containers: Use shared volume or Redis-based coordination
### Issue: No worker becomes primary
**Cause**: Lock file from previous run still exists.
**Solution**:
```bash
# Clean up stale lock file
rm /tmp/alpinebits_primary_worker.lock
# Restart application
```
### Issue: Duplicate emails still being sent
**Cause**: Email handler running on all workers (not just schedulers).
**Solution**: Email **alert handler** runs on all workers (to catch errors from any worker). Email **scheduler** only runs on primary. This is correct behavior - alerts come from any worker, scheduled reports only from primary.
## Alternative Approaches
### Redis-Based Coordination
For multi-container deployments, consider Redis-based locks:
```python
import redis
from redis.lock import Lock

redis_client = redis.Redis(host='redis', port=6379)
lock = Lock(redis_client, "alpinebits_primary_worker", timeout=60)

if lock.acquire(blocking=False):
    # This is the primary worker
    start_schedulers()
```
**Pros**: Works across containers
**Cons**: Requires Redis dependency
### Environment Variable (Not Recommended)
```bash
# Manually set primary worker
ALPINEBITS_PRIMARY_WORKER=true uvicorn app:app
```
**Pros**: Simple
**Cons**: Manual configuration, no automatic failover
## Best Practices
1. **Use file locks for single-host deployments** (our implementation)
2. **Use Redis locks for multi-container deployments**
3. **Log primary/secondary status at startup**
4. **Always release locks on shutdown**
5. **Keep lock files in `/var/run/` or `/tmp/`**
6. **Don't rely on process names** (unreliable with uvicorn)
7. **Don't use environment variables** (no automatic failover)
8. **Don't skip coordination** (will cause duplicate notifications)
## Summary
With file-based worker coordination:
- ✅ Only ONE worker runs singleton services (schedulers, migrations)
- ✅ All workers handle HTTP requests normally
- ✅ Automatic failover if primary worker crashes
- ✅ No external dependencies needed
- ✅ Works with uvicorn, gunicorn, and other ASGI servers
This ensures you get the benefits of multiple workers (concurrency) without duplicate email notifications or race conditions.

View File

@@ -0,0 +1,154 @@
╔══════════════════════════════════════════════════════════════════════════════╗
║ MULTI-WORKER FASTAPI ARCHITECTURE ║
╚══════════════════════════════════════════════════════════════════════════════╝
┌─────────────────────────────────────────────────────────────────────────────┐
│ Command: uvicorn alpine_bits_python.api:app --workers 4 │
└─────────────────────────────────────────────────────────────────────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Master Process (uvicorn supervisor) ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
│ │ │ │
┌───────────┼──────────┼──────────┼──────────┼───────────┐
│ │ │ │ │ │
▼ ▼ ▼ ▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐ ┌──────────────────┐
│Worker 0│ │Worker 1│ │Worker 2│ │Worker 3│ │Lock File │
│PID:1001│ │PID:1002│ │PID:1003│ │PID:1004│ │/tmp/alpinebits │
└────┬───┘ └───┬────┘ └───┬────┘ └───┬────┘ │_primary_worker │
│ │ │ │ │.lock │
│ │ │ │ └──────────────────┘
│ │ │ │ ▲
│ │ │ │ │
└─────────┴──────────┴──────────┴─────────────┤
All try to acquire lock │
│ │
▼ │
┌───────────────────────┐ │
│ fcntl.flock(LOCK_EX) │────────────┘
│ Non-blocking attempt │
└───────────────────────┘
┏━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┓
▼ ▼
┌─────────┐ ┌──────────────┐
│SUCCESS │ │ WOULD BLOCK │
│(First) │ │(Others) │
└────┬────┘ └──────┬───────┘
│ │
▼ ▼
╔════════════════════════════════╗ ╔══════════════════════════════╗
║ PRIMARY WORKER ║ ║ SECONDARY WORKERS ║
║ (Worker 0, PID 1001) ║ ║ (Workers 1-3) ║
╠════════════════════════════════╣ ╠══════════════════════════════╣
║ ║ ║ ║
║ ✓ Handle HTTP requests ║ ║ ✓ Handle HTTP requests ║
║ ✓ Start email scheduler ║ ║ ✗ Skip email scheduler ║
║ ✓ Send daily reports ║ ║ ✗ Skip daily reports ║
║ ✓ Run DB migrations ║ ║ ✗ Skip DB migrations ║
║ ✓ Hash customers (startup) ║ ║ ✗ Skip customer hashing ║
║ ✓ Send error alerts ║ ║ ✓ Send error alerts ║
║ ✓ Process webhooks ║ ║ ✓ Process webhooks ║
║ ✓ AlpineBits endpoints ║ ║ ✓ AlpineBits endpoints ║
║ ║ ║ ║
║ Holds: worker_lock ║ ║ worker_lock = None ║
║ ║ ║ ║
╚════════════════════════════════╝ ╚══════════════════════════════╝
│ │
│ │
└──────────┬───────────────────────────┘
┌───────────────────────────┐
│ Incoming HTTP Request │
└───────────────────────────┘
(Load balanced by OS)
┌───────────┴──────────────┐
│ │
▼ ▼
Any worker can handle Round-robin distribution
the request normally across all 4 workers
╔══════════════════════════════════════════════════════════════════════════════╗
║ SINGLETON SERVICES ║
╚══════════════════════════════════════════════════════════════════════════════╝
Only run on PRIMARY worker:
┌─────────────────────────────────────────────────────────────┐
│ Email Scheduler │
│ ├─ Daily Report: 8:00 AM │
│ └─ Stats Collection: Per-hotel reservation counts │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Startup Tasks (One-time) │
│ ├─ Database table creation │
│ ├─ Customer data hashing/backfill │
│ └─ Configuration validation │
└─────────────────────────────────────────────────────────────┘
╔══════════════════════════════════════════════════════════════════════════════╗
║ SHARED SERVICES ║
╚══════════════════════════════════════════════════════════════════════════════╝
Run on ALL workers (primary + secondary):
┌─────────────────────────────────────────────────────────────┐
│ HTTP Request Handling │
│ ├─ Webhook endpoints (/api/webhook/*) │
│ ├─ AlpineBits endpoints (/api/alpinebits/*) │
│ └─ Health checks (/api/health) │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Error Alert Handler │
│ └─ Any worker can send immediate error alerts │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Event Dispatching │
│ └─ Background tasks triggered by webhooks │
└─────────────────────────────────────────────────────────────┘
╔══════════════════════════════════════════════════════════════════════════════╗
║ SHUTDOWN & FAILOVER ║
╚══════════════════════════════════════════════════════════════════════════════╝
Graceful Shutdown:
┌─────────────────────────────────────────────────────────────┐
│ 1. SIGTERM received │
│ 2. Stop scheduler (primary only) │
│ 3. Close email handler │
│ 4. Release worker_lock (primary only) │
│ 5. Dispose database engine │
└─────────────────────────────────────────────────────────────┘
Primary Worker Crash:
┌─────────────────────────────────────────────────────────────┐
│ 1. Primary worker crashes │
│ 2. OS automatically releases file lock │
│ 3. Secondary workers continue handling requests │
│ 4. On next restart, first worker becomes new primary │
└─────────────────────────────────────────────────────────────┘
╔══════════════════════════════════════════════════════════════════════════════╗
║ KEY BENEFITS ║
╚══════════════════════════════════════════════════════════════════════════════╝
✓ No duplicate email notifications
✓ No race conditions in database operations
✓ Automatic failover if primary crashes
✓ Load distribution for HTTP requests
✓ No external dependencies (Redis, etc.)
✓ Simple and reliable
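
The locking flow in the diagram comes down to a few lines of `fcntl`. Here is a minimal sketch of the idea; the public names (`is_primary_worker()`, `worker_lock.release()`) match how the application uses them, but the body is illustrative rather than a verbatim copy of `worker_coordination.py`:

```python
import fcntl
import os

LOCK_FILE = "/tmp/alpinebits_primary_worker.lock"


class WorkerLock:
    """Holds the primary-worker file lock for the lifetime of the process."""

    def __init__(self, fd: int) -> None:
        self._fd = fd

    def release(self) -> None:
        fcntl.flock(self._fd, fcntl.LOCK_UN)
        os.close(self._fd)


def is_primary_worker() -> tuple[bool, WorkerLock | None]:
    """Try to take an exclusive, non-blocking lock on the lock file.

    Exactly one worker succeeds and becomes primary; all others get
    (False, None). The kernel drops the lock automatically if the holder
    crashes, which is what makes the failover described above work.
    """
    fd = os.open(LOCK_FILE, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        return False, None
    # Record our PID so `cat /tmp/alpinebits_primary_worker.lock` shows the primary.
    os.ftruncate(fd, 0)
    os.write(fd, str(os.getpid()).encode())
    return True, WorkerLock(fd)
```

Using `LOCK_EX | LOCK_NB` means the losing workers fail fast instead of blocking startup, and a crashed primary never leaves a stale lock behind, because `flock` locks are released when the owning process exits.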

View File: examples/test_email_monitoring.py

@@ -0,0 +1,305 @@
"""Example script to test email monitoring functionality.
This script demonstrates how to:
1. Configure the email service
2. Send test emails
3. Trigger error alerts
4. Test daily report generation
Usage:
uv run python examples/test_email_monitoring.py
"""
import asyncio
import logging
from datetime import datetime
from alpine_bits_python.config_loader import load_config
from alpine_bits_python.email_monitoring import (
DailyReportScheduler,
EmailAlertHandler,
)
from alpine_bits_python.email_service import create_email_service
from alpine_bits_python.logging_config import get_logger, setup_logging
_LOGGER = get_logger(__name__)
async def test_basic_email():
"""Test 1: Send a basic test email."""
print("\n" + "=" * 60)
print("Test 1: Basic Email Sending")
print("=" * 60)
config = load_config()
email_service = create_email_service(config)
if not email_service:
print("❌ Email service not configured. Check your config.yaml")
return False
print("✓ Email service initialized")
# Get the first recipient from error_alerts config
email_config = config.get("email", {})
monitoring_config = email_config.get("monitoring", {})
error_alerts_config = monitoring_config.get("error_alerts", {})
recipients = error_alerts_config.get("recipients", [])
if not recipients:
print("❌ No recipients configured in error_alerts")
return False
print(f"✓ Sending test email to: {recipients[0]}")
success = await email_service.send_email(
recipients=[recipients[0]],
subject="AlpineBits Email Test - Basic",
body=f"""This is a test email from the AlpineBits server.
Timestamp: {datetime.now().isoformat()}
Test: Basic email sending
If you received this email, your SMTP configuration is working correctly!
---
AlpineBits Python Server
Email Monitoring System
""",
)
if success:
print("✅ Test email sent successfully!")
return True
else:
print("❌ Failed to send test email. Check logs for details.")
return False
async def test_error_alert_threshold():
"""Test 2: Trigger immediate error alert by exceeding threshold."""
print("\n" + "=" * 60)
print("Test 2: Error Alert - Threshold Trigger")
print("=" * 60)
config = load_config()
email_service = create_email_service(config)
if not email_service:
print("❌ Email service not configured")
return False
# Setup logging with email monitoring
loop = asyncio.get_running_loop()
email_handler, _ = setup_logging(config, email_service, loop=loop)  # loop by keyword: the third positional arg is pushover_service
if not email_handler:
print("❌ Error alert handler not configured")
return False
print(f"✓ Error alert handler configured (threshold: {email_handler.error_threshold})")
print(f" Recipients: {email_handler.recipients}")
# Generate errors to exceed threshold
threshold = email_handler.error_threshold
print(f"\n📨 Generating {threshold} errors to trigger immediate alert...")
logger = logging.getLogger("test.error.threshold")
for i in range(threshold):
logger.error(f"Test error #{i + 1} - Threshold test at {datetime.now().isoformat()}")
print(f" → Error {i + 1}/{threshold} logged")
await asyncio.sleep(0.1) # Small delay between errors
# Wait a bit for email to be sent
print("\n⏳ Waiting for alert email to be sent...")
await asyncio.sleep(3)
print("✅ Threshold test complete! Check your email for the alert.")
return True
async def test_error_alert_buffer():
"""Test 3: Trigger buffered error alert by waiting for buffer time."""
print("\n" + "=" * 60)
print("Test 3: Error Alert - Buffer Time Trigger")
print("=" * 60)
config = load_config()
email_service = create_email_service(config)
if not email_service:
print("❌ Email service not configured")
return False
# Setup logging with email monitoring
loop = asyncio.get_running_loop()
email_handler, _ = setup_logging(config, email_service, loop=loop)  # loop by keyword: the third positional arg is pushover_service
if not email_handler:
print("❌ Error alert handler not configured")
return False
print(f"✓ Error alert handler configured (buffer: {email_handler.buffer_minutes} minutes)")
# Generate fewer errors than threshold
num_errors = max(1, email_handler.error_threshold - 2)
print(f"\n📨 Generating {num_errors} errors (below threshold)...")
logger = logging.getLogger("test.error.buffer")
for i in range(num_errors):
logger.error(f"Test error #{i + 1} - Buffer test at {datetime.now().isoformat()}")
print(f" → Error {i + 1}/{num_errors} logged")
buffer_seconds = email_handler.buffer_minutes * 60
print(f"\n⏳ Waiting {email_handler.buffer_minutes} minute(s) for buffer to flush...")
print(" (This will send an email with all buffered errors)")
# Wait for buffer time + a bit extra
await asyncio.sleep(buffer_seconds + 2)
print("✅ Buffer test complete! Check your email for the alert.")
return True
async def test_daily_report():
"""Test 4: Generate and send a test daily report."""
print("\n" + "=" * 60)
print("Test 4: Daily Report")
print("=" * 60)
config = load_config()
email_service = create_email_service(config)
if not email_service:
print("❌ Email service not configured")
return False
# Create a daily report scheduler
daily_report_config = (
config.get("email", {})
.get("monitoring", {})
.get("daily_report", {})
)
if not daily_report_config.get("enabled"):
print("⚠️ Daily reports not enabled in config")
print(" Set email.monitoring.daily_report.enabled = true")
return False
scheduler = DailyReportScheduler(email_service, daily_report_config)
print(f"✓ Daily report scheduler configured")
print(f" Recipients: {scheduler.recipients}")
print(f" Send time: {scheduler.send_time}")
# Add some test statistics
test_stats = {
"total_reservations": 42,
"new_customers": 15,
"active_hotels": 4,
"api_requests_today": 1234,
"average_response_time_ms": 45,
"success_rate": "99.2%",
}
# Add some test errors
test_errors = [
{
"timestamp": "2025-10-15 08:15:23",
"level": "ERROR",
"message": "Connection timeout to external API",
},
{
"timestamp": "2025-10-15 12:45:10",
"level": "ERROR",
"message": "Invalid form data submitted",
},
{
"timestamp": "2025-10-15 18:30:00",
"level": "CRITICAL",
"message": "Database connection pool exhausted",
},
]
print("\n📊 Sending test daily report...")
print(f" Stats: {len(test_stats)} metrics")
print(f" Errors: {len(test_errors)} entries")
success = await email_service.send_daily_report(
recipients=scheduler.recipients,
stats=test_stats,
errors=test_errors,
)
if success:
print("✅ Daily report sent successfully!")
return True
else:
print("❌ Failed to send daily report. Check logs for details.")
return False
async def run_all_tests():
"""Run all email monitoring tests."""
print("\n" + "=" * 60)
print("AlpineBits Email Monitoring Test Suite")
print("=" * 60)
tests = [
("Basic Email", test_basic_email),
("Error Alert (Threshold)", test_error_alert_threshold),
("Error Alert (Buffer)", test_error_alert_buffer),
("Daily Report", test_daily_report),
]
results = []
for test_name, test_func in tests:
try:
result = await test_func()
results.append((test_name, result))
except Exception as e:
print(f"\n❌ Test '{test_name}' failed with exception: {e}")
results.append((test_name, False))
# Wait between tests to avoid rate limiting
await asyncio.sleep(2)
# Print summary
print("\n" + "=" * 60)
print("Test Summary")
print("=" * 60)
passed = sum(1 for _, result in results if result)
total = len(results)
for test_name, result in results:
status = "✅ PASS" if result else "❌ FAIL"
print(f"{status}: {test_name}")
print(f"\nTotal: {passed}/{total} tests passed")
if passed == total:
print("\n🎉 All tests passed!")
else:
print(f"\n⚠️ {total - passed} test(s) failed")
def main():
"""Main entry point."""
print("Starting email monitoring tests...")
print("Make sure you have configured email settings in config.yaml")
print("and set EMAIL_USERNAME and EMAIL_PASSWORD environment variables.")
# Run the tests
try:
asyncio.run(run_all_tests())
except KeyboardInterrupt:
print("\n\n⚠️ Tests interrupted by user")
except Exception as e:
print(f"\n\n❌ Fatal error: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
main()

View File: pyproject.toml

@@ -12,10 +12,12 @@ dependencies = [
"aiosqlite>=0.21.0",
"annotatedyaml>=1.0.0",
"dotenv>=0.9.9",
"fast-langdetect>=1.0.0",
"fastapi>=0.117.1",
"generateds>=2.44.3",
"httpx>=0.28.1",
"lxml>=6.0.1",
"pushover-complete>=2.0.0",
"pydantic[email]>=2.11.9",
"pytest>=8.4.2",
"pytest-asyncio>=1.2.0",

View File: src/alpine_bits_python/api.py

@@ -1,6 +1,7 @@
import asyncio
import gzip
import json
import multiprocessing
import os
import traceback
import urllib.parse
@@ -11,10 +12,17 @@ from pathlib import Path
from typing import Any
import httpx
from fast_langdetect import detect
from fastapi import APIRouter, Depends, FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import HTMLResponse, Response
from fastapi.security import HTTPBasic, HTTPBasicCredentials
from fastapi.security import (
HTTPAuthorizationCredentials,
HTTPBasic,
HTTPBasicCredentials,
HTTPBearer,
)
from pydantic import BaseModel
from slowapi.errors import RateLimitExceeded
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine
@@ -32,7 +40,12 @@ from .customer_service import CustomerService
from .db import Base, get_database_url
from .db import Customer as DBCustomer
from .db import Reservation as DBReservation
from .email_monitoring import ReservationStatsCollector
from .email_service import create_email_service
from .logging_config import get_logger, setup_logging
from .notification_adapters import EmailNotificationAdapter, PushoverNotificationAdapter
from .notification_service import NotificationService
from .pushover_service import create_pushover_service
from .rate_limit import (
BURST_RATE_LIMIT,
DEFAULT_RATE_LIMIT,
@@ -42,12 +55,28 @@ from .rate_limit import (
webhook_limiter,
)
from .reservation_service import ReservationService
from .worker_coordination import is_primary_worker
# Configure logging - will be reconfigured during lifespan with actual config
_LOGGER = get_logger(__name__)
# HTTP Basic auth for AlpineBits
security_basic = HTTPBasic()
# HTTP Bearer auth for API endpoints
security_bearer = HTTPBearer()
# Constants for token sanitization
TOKEN_LOG_LENGTH = 10
# Pydantic models for language detection
class LanguageDetectionRequest(BaseModel):
text: str
class LanguageDetectionResponse(BaseModel):
language_code: str
score: float
# --- Enhanced event dispatcher with hotel-specific routing ---
@@ -179,15 +208,39 @@ async def push_listener(customer: DBCustomer, reservation: DBReservation, hotel)
async def lifespan(app: FastAPI):
# Setup DB
# Determine if this is the primary worker using file-based locking
# Only primary runs schedulers/background tasks
# In multi-worker setups, only one worker should run singleton services
is_primary, worker_lock = is_primary_worker()
_LOGGER.info(
"Worker startup: process=%s, pid=%d, primary=%s",
multiprocessing.current_process().name,
os.getpid(),
is_primary,
)
try:
config = load_config()
except Exception:
_LOGGER.exception("Failed to load config: ")
config = {}
# Setup logging from config
setup_logging(config)
_LOGGER.info("Application startup initiated")
# Get event loop for email monitoring
loop = asyncio.get_running_loop()
# Initialize email service (before logging setup so it can be used by handlers)
email_service = create_email_service(config)
# Initialize pushover service
pushover_service = create_pushover_service(config)
# Setup logging from config with email and pushover monitoring
# Only primary worker should have the report scheduler running
email_handler, report_scheduler = setup_logging(
config, email_service, pushover_service, loop, enable_scheduler=is_primary
)
_LOGGER.info("Application startup initiated (primary_worker=%s)", is_primary)
DATABASE_URL = get_database_url(config)
engine = create_async_engine(DATABASE_URL, echo=False)
@@ -198,6 +251,10 @@ async def lifespan(app: FastAPI):
app.state.config = config
app.state.alpine_bits_server = AlpineBitsServer(config)
app.state.event_dispatcher = event_dispatcher
app.state.email_service = email_service
app.state.pushover_service = pushover_service
app.state.email_handler = email_handler
app.state.report_scheduler = report_scheduler
# Register push listeners for hotels with push_endpoint
for hotel in config.get("alpine_bits_auth", []):
@@ -224,7 +281,8 @@ async def lifespan(app: FastAPI):
await conn.run_sync(Base.metadata.create_all)
_LOGGER.info("Database tables checked/created at startup.")
# Hash any existing customers that don't have hashed versions yet
# Hash any existing customers (only in primary worker to avoid race conditions)
if is_primary:
async with AsyncSessionLocal() as session:
customer_service = CustomerService(session)
hashed_count = await customer_service.hash_existing_customers()
@@ -234,11 +292,91 @@ async def lifespan(app: FastAPI):
)
else:
_LOGGER.info("All existing customers already have hashed data")
else:
_LOGGER.info("Skipping customer hashing (non-primary worker)")
# Initialize and hook up stats collector for daily reports
# Note: report_scheduler will only exist on the primary worker
if report_scheduler:
stats_collector = ReservationStatsCollector(
async_sessionmaker=AsyncSessionLocal,
config=config,
)
# Hook up the stats collector to the report scheduler
report_scheduler.set_stats_collector(stats_collector.collect_stats)
_LOGGER.info("Stats collector initialized and hooked up to report scheduler")
# Send a test daily report on startup for testing (with 24-hour lookback)
_LOGGER.info("Sending test daily report on startup (last 24 hours)")
try:
# Use lookback_hours=24 to get stats from last 24 hours
stats = await stats_collector.collect_stats(lookback_hours=24)
# Send via email (if configured)
if email_service:
success = await email_service.send_daily_report(
recipients=report_scheduler.recipients,
stats=stats,
errors=None,
)
if success:
_LOGGER.info("Test daily report sent via email successfully on startup")
else:
_LOGGER.error("Failed to send test daily report via email on startup")
# Send via Pushover (if configured)
if pushover_service:
pushover_config = config.get("pushover", {})
pushover_monitoring = pushover_config.get("monitoring", {})
pushover_daily_report = pushover_monitoring.get("daily_report", {})
priority = pushover_daily_report.get("priority", 0)
success = await pushover_service.send_daily_report(
stats=stats,
errors=None,
priority=priority,
)
if success:
_LOGGER.info("Test daily report sent via Pushover successfully on startup")
else:
_LOGGER.error("Failed to send test daily report via Pushover on startup")
except Exception:
_LOGGER.exception("Error sending test daily report on startup")
# Start daily report scheduler
report_scheduler.start()
_LOGGER.info("Daily report scheduler started")
_LOGGER.info("Application startup complete")
yield
# Optional: Dispose engine on shutdown
# Cleanup on shutdown
_LOGGER.info("Application shutdown initiated")
# Stop daily report scheduler
if report_scheduler:
report_scheduler.stop()
_LOGGER.info("Daily report scheduler stopped")
# Close email alert handler (flush any remaining errors)
if email_handler:
email_handler.close()
_LOGGER.info("Email alert handler closed")
# Shutdown email service thread pool
if email_service:
email_service.shutdown()
_LOGGER.info("Email service shut down")
# Dispose engine
await engine.dispose()
_LOGGER.info("Application shutdown complete")
# Release worker lock if this was the primary worker
if worker_lock:
worker_lock.release()
async def get_async_session(request: Request):
@@ -307,6 +445,85 @@ async def health_check(request: Request):
}
@api_router.post("/detect-language", response_model=LanguageDetectionResponse)
@limiter.limit(DEFAULT_RATE_LIMIT)
async def detect_language(
request: Request,
data: LanguageDetectionRequest,
credentials: HTTPAuthorizationCredentials = Depends(security_bearer),
):
"""Detect language of text, restricted to Italian or German.
Requires Bearer token authentication.
Returns the most likely language (it or de) with confidence score.
"""
# Validate bearer token
token = credentials.credentials
config = request.app.state.config
# Check if token is valid
valid_tokens = config.get("api_tokens", [])
# If no tokens configured, reject authentication
if not valid_tokens:
_LOGGER.error("No api_tokens configured in config.yaml")
raise HTTPException(
status_code=401,
detail="Authentication token not configured on server",
)
if token not in valid_tokens:
# Log sanitized token (first TOKEN_LOG_LENGTH chars) for security
sanitized_token = (
token[:TOKEN_LOG_LENGTH] + "..." if len(token) > TOKEN_LOG_LENGTH else token
)
_LOGGER.warning("Invalid token attempt: %s", sanitized_token)
raise HTTPException(
status_code=401,
detail="Invalid authentication token",
)
try:
# Detect language with k=2 to get top 2 candidates
results = detect(data.text, k=2)
_LOGGER.info("Language detection results: %s", results)
# Filter for Italian (it) or German (de)
italian_german_results = [r for r in results if r.get("lang") in ["it", "de"]]
if italian_german_results:
# Return the best match between Italian and German
best_match = italian_german_results[0]
return_value = "Italienisch" if best_match["lang"] == "it" else "Deutsch"
return LanguageDetectionResponse(
language_code=return_value, score=best_match.get("score", 0.0)
)
# If neither Italian nor German detected in top 2, check all results
all_results = detect(data.text, k=10)
italian_german_all = [r for r in all_results if r.get("lang") in ["it", "de"]]
if italian_german_all:
best_match = italian_german_all[0]
return_value = "Italienisch" if best_match["lang"] == "it" else "Deutsch"
return LanguageDetectionResponse(
language_code=return_value, score=best_match.get("score", 0.0)
)
# Default to German if no clear detection
_LOGGER.warning(
"Could not detect Italian or German in text: %s, defaulting to 'de'",
data.text[:100],
)
return LanguageDetectionResponse(language_code="Deutsch", score=0.0)
except Exception as e:
_LOGGER.exception("Error detecting language")
raise HTTPException(status_code=500, detail=f"Error detecting language: {e!s}") from e
# Extracted business logic for handling Wix form submissions
async def process_wix_form_submission(request: Request, data: dict[str, Any], db):
"""Shared business logic for handling Wix form submissions (test and production)."""
@@ -544,6 +761,9 @@ async def process_generic_webhook_submission(
name_prefix = form_data.get("anrede")
language = form_data.get("sprache", "de")[:2]
user_comment = form_data.get("nachricht", "")
plz = form_data.get("plz", "")
city = form_data.get("stadt", "")
country = form_data.get("land", "")
# Parse dates - handle DD.MM.YYYY format
start_date_str = form_data.get("anreise")
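# NOTE: the next hunk elides the actual date parsing. A minimal sketch of the
# DD.MM.YYYY handling mentioned above (helper name hypothetical, not from this
# PR; assumes `from datetime import date, datetime`):
#
#     def _parse_ddmmyyyy(value: str | None) -> date | None:
#         if not value:
#             return None
#         try:
#             return datetime.strptime(value.strip(), "%d.%m.%Y").date()
#         except ValueError:
#             return None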
@@ -623,9 +843,9 @@ async def process_generic_webhook_submission(
"phone": phone_number if phone_number else None,
"email_newsletter": False,
"address_line": None,
"city_name": None,
"postal_code": None,
"country_code": None,
"city_name": city if city else None,
"postal_code": plz if plz else None,
"country_code": country if country else None,
"gender": None,
"birth_date": None,
"language": language,

View File: src/alpine_bits_python/config_loader.py

@@ -6,9 +6,12 @@ from annotatedyaml.loader import load_yaml as load_annotated_yaml
from voluptuous import (
PREVENT_EXTRA,
All,
Boolean,
In,
Length,
MultipleInvalid,
Optional,
Range,
Required,
Schema,
)
@@ -82,12 +85,122 @@ hotel_auth_schema = Schema(
basic_auth_schema = Schema(All([hotel_auth_schema], Length(min=1)))
# Email SMTP configuration schema
smtp_schema = Schema(
{
Required("host", default="localhost"): str,
Required("port", default=587): Range(min=1, max=65535),
Optional("username"): str,
Optional("password"): str,
Required("use_tls", default=True): Boolean(),
Required("use_ssl", default=False): Boolean(),
},
extra=PREVENT_EXTRA,
)
# Email daily report configuration schema
daily_report_schema = Schema(
{
Required("enabled", default=False): Boolean(),
Optional("recipients", default=[]): [str],
Required("send_time", default="08:00"): str,
Required("include_stats", default=True): Boolean(),
Required("include_errors", default=True): Boolean(),
},
extra=PREVENT_EXTRA,
)
# Email error alerts configuration schema
error_alerts_schema = Schema(
{
Required("enabled", default=False): Boolean(),
Optional("recipients", default=[]): [str],
Required("error_threshold", default=5): Range(min=1),
Required("buffer_minutes", default=15): Range(min=1),
Required("cooldown_minutes", default=15): Range(min=0),
Required("log_levels", default=["ERROR", "CRITICAL"]): [
In(["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"])
],
},
extra=PREVENT_EXTRA,
)
# Email monitoring configuration schema
monitoring_schema = Schema(
{
Optional("daily_report", default={}): daily_report_schema,
Optional("error_alerts", default={}): error_alerts_schema,
},
extra=PREVENT_EXTRA,
)
# Complete email configuration schema
email_schema = Schema(
{
Optional("smtp", default={}): smtp_schema,
Required("from_address", default="noreply@example.com"): str,
Required("from_name", default="AlpineBits Server"): str,
Optional("timeout", default=10): Range(min=1, max=300),
Optional("monitoring", default={}): monitoring_schema,
},
extra=PREVENT_EXTRA,
)
# Pushover daily report configuration schema
pushover_daily_report_schema = Schema(
{
Required("enabled", default=False): Boolean(),
Required("send_time", default="08:00"): str,
Required("include_stats", default=True): Boolean(),
Required("include_errors", default=True): Boolean(),
Required("priority", default=0): Range(min=-2, max=2), # Pushover priority levels
},
extra=PREVENT_EXTRA,
)
# Pushover error alerts configuration schema
pushover_error_alerts_schema = Schema(
{
Required("enabled", default=False): Boolean(),
Required("error_threshold", default=5): Range(min=1),
Required("buffer_minutes", default=15): Range(min=1),
Required("cooldown_minutes", default=15): Range(min=0),
Required("log_levels", default=["ERROR", "CRITICAL"]): [
In(["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"])
],
Required("priority", default=1): Range(min=-2, max=2), # Pushover priority levels
},
extra=PREVENT_EXTRA,
)
# Pushover monitoring configuration schema
pushover_monitoring_schema = Schema(
{
Optional("daily_report", default={}): pushover_daily_report_schema,
Optional("error_alerts", default={}): pushover_error_alerts_schema,
},
extra=PREVENT_EXTRA,
)
# Complete pushover configuration schema
pushover_schema = Schema(
{
Optional("user_key"): str, # Optional but required for pushover to work
Optional("api_token"): str, # Optional but required for pushover to work
Optional("monitoring", default={}): pushover_monitoring_schema,
},
extra=PREVENT_EXTRA,
)
config_schema = Schema(
{
Required(CONF_DATABASE): database_schema,
Required(CONF_ALPINE_BITS_AUTH): basic_auth_schema,
Required(CONF_SERVER): server_info,
Required(CONF_LOGGING): logger_schema,
Optional("email"): email_schema, # Email is optional
Optional("pushover"): pushover_schema, # Pushover is optional
Optional("api_tokens", default=[]): [str], # API tokens for bearer auth
},
extra=PREVENT_EXTRA,
)
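# Example: a hypothetical config.yaml excerpt that validates against the
# schemas above. Values are placeholders; !secret references resolve via
# secrets.yaml, and any field with a schema default may be omitted.
#
#   email:
#     smtp:
#       host: smtp.example.com
#       port: 587
#       username: alerts@example.com
#       password: !secret SMTP_PASSWORD
#     from_address: noreply@example.com
#     monitoring:
#       error_alerts:
#         enabled: true
#         recipients: [ops@example.com]
#       daily_report:
#         enabled: true
#         recipients: [ops@example.com]
#         send_time: "08:00"
#   pushover:
#     user_key: !secret PUSHOVER_USER_KEY
#     api_token: !secret PUSHOVER_API_TOKEN
#   api_tokens:
#     - !secret API_BEARER_TOKEN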

View File: src/alpine_bits_python/email_monitoring.py

@@ -0,0 +1,571 @@
"""Email monitoring and alerting through logging integration.
This module provides a custom logging handler that accumulates errors and sends
email alerts based on configurable thresholds and time windows.
"""
import asyncio
import logging
import threading
from collections import deque
from datetime import datetime, timedelta
from typing import Any
from sqlalchemy import func, select
from sqlalchemy.ext.asyncio import async_sessionmaker
from .db import Reservation
from .email_service import EmailService
from .logging_config import get_logger
_LOGGER = get_logger(__name__)
class ErrorRecord:
"""Represents a single error log record for monitoring.
Attributes:
timestamp: When the error occurred
level: Log level (ERROR, CRITICAL, etc.)
logger_name: Name of the logger that generated the error
message: The error message
exception: Exception info if available
module: Module where error occurred
line_no: Line number where error occurred
"""
def __init__(self, record: logging.LogRecord):
"""Initialize from a logging.LogRecord.
Args:
record: The logging record to wrap
"""
self.timestamp = datetime.fromtimestamp(record.created)
self.level = record.levelname
self.logger_name = record.name
self.message = record.getMessage()
self.exception = record.exc_text if record.exc_info else None
self.module = record.module
self.line_no = record.lineno
self.pathname = record.pathname
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary format.
Returns:
Dictionary representation of the error
"""
return {
"timestamp": self.timestamp.strftime("%Y-%m-%d %H:%M:%S"),
"level": self.level,
"logger_name": self.logger_name,
"message": self.message,
"exception": self.exception,
"module": self.module,
"line_no": self.line_no,
"pathname": self.pathname,
}
def format_plain_text(self) -> str:
"""Format error as plain text for email.
Returns:
Formatted plain text string
"""
text = f"[{self.timestamp.strftime('%Y-%m-%d %H:%M:%S')}] {self.level}: {self.message}\n"
text += f" Module: {self.module}:{self.line_no} ({self.logger_name})\n"
if self.exception:
text += f" Exception:\n{self.exception}\n"
return text
class EmailAlertHandler(logging.Handler):
"""Custom logging handler that sends email alerts for errors.
This handler uses a hybrid approach:
- Accumulates errors in a buffer
- Sends immediately if error threshold is reached
- Otherwise sends after buffer duration expires
- Always sends buffered errors (no minimum threshold for time-based flush)
- Implements cooldown to prevent alert spam
The handler is thread-safe and works with asyncio event loops.
"""
def __init__(
self,
email_service: EmailService,
config: dict[str, Any],
loop: asyncio.AbstractEventLoop | None = None,
):
"""Initialize the email alert handler.
Args:
email_service: Email service instance for sending alerts
config: Configuration dictionary for error alerts
loop: Asyncio event loop (will use current loop if not provided)
"""
super().__init__()
self.email_service = email_service
self.config = config
self.loop = loop # Will be set when first error occurs if not provided
# Configuration
self.recipients = config.get("recipients", [])
self.error_threshold = config.get("error_threshold", 5)
self.buffer_minutes = config.get("buffer_minutes", 15)
self.cooldown_minutes = config.get("cooldown_minutes", 15)
self.log_levels = config.get("log_levels", ["ERROR", "CRITICAL"])
# State
self.error_buffer: deque[ErrorRecord] = deque()
self.last_sent = datetime.min # Last time we sent an alert
self._flush_task: asyncio.Task | None = None
self._lock = threading.Lock() # Thread-safe for multi-threaded logging
_LOGGER.info(
"EmailAlertHandler initialized: threshold=%d, buffer=%dmin, cooldown=%dmin",
self.error_threshold,
self.buffer_minutes,
self.cooldown_minutes,
)
def emit(self, record: logging.LogRecord) -> None:
"""Handle a log record.
This is called automatically by the logging system when an error is logged.
It's important that this method is fast and doesn't block.
Args:
record: The log record to handle
"""
# Only handle configured log levels
if record.levelname not in self.log_levels:
return
try:
# Ensure we have an event loop
if self.loop is None:
try:
self.loop = asyncio.get_running_loop()
except RuntimeError:
# No running loop, we'll need to handle this differently
_LOGGER.warning("No asyncio event loop available for email alerts")
return
# Add error to buffer (thread-safe)
with self._lock:
error_record = ErrorRecord(record)
self.error_buffer.append(error_record)
buffer_size = len(self.error_buffer)
# Determine if we should send immediately
should_send_immediately = buffer_size >= self.error_threshold
if should_send_immediately:
# Cancel any pending flush task
if self._flush_task and not self._flush_task.done():
self._flush_task.cancel()
# Schedule immediate flush
self._flush_task = asyncio.run_coroutine_threadsafe(
self._flush_buffer(immediate=True),
self.loop,
)
# Schedule delayed flush if not already scheduled
elif not self._flush_task or self._flush_task.done():
self._flush_task = asyncio.run_coroutine_threadsafe(
self._schedule_delayed_flush(),
self.loop,
)
except Exception:
# Never let the handler crash - just log and continue
_LOGGER.exception("Error in EmailAlertHandler.emit")
async def _schedule_delayed_flush(self) -> None:
"""Schedule a delayed buffer flush after buffer duration."""
await asyncio.sleep(self.buffer_minutes * 60)
await self._flush_buffer(immediate=False)
async def _flush_buffer(self, *, immediate: bool) -> None:
"""Flush the error buffer and send email alert.
Args:
immediate: Whether this is an immediate flush (threshold hit)
"""
# Check cooldown period
now = datetime.now()
time_since_last = (now - self.last_sent).total_seconds() / 60
if time_since_last < self.cooldown_minutes:
_LOGGER.info(
"Alert cooldown active (%.1f min remaining), buffering errors",
self.cooldown_minutes - time_since_last,
)
# Don't clear buffer - let errors accumulate until cooldown expires
return
# Get all buffered errors (thread-safe)
with self._lock:
if not self.error_buffer:
return
errors = list(self.error_buffer)
self.error_buffer.clear()
# Update last sent time
self.last_sent = now
# Format email
error_count = len(errors)
time_range = (
f"{errors[0].timestamp.strftime('%H:%M:%S')} to "
f"{errors[-1].timestamp.strftime('%H:%M:%S')}"
)
# Determine alert type for subject
alert_type = "Immediate Alert" if immediate else "Scheduled Alert"
if immediate:
emoji = "🚨"
reason = f"(threshold of {self.error_threshold} exceeded)"
else:
emoji = "⚠️"
reason = f"({self.buffer_minutes} minute buffer)"
subject = (
f"{emoji} AlpineBits Error {alert_type}: {error_count} errors {reason}"
)
# Build plain text body
body = f"Error Alert - {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n"
body += "=" * 70 + "\n\n"
body += f"Alert Type: {alert_type}\n"
body += f"Error Count: {error_count}\n"
body += f"Time Range: {time_range}\n"
body += f"Reason: {reason}\n"
body += "\n" + "=" * 70 + "\n\n"
# Add individual errors
body += "Errors:\n"
body += "-" * 70 + "\n\n"
for error in errors:
body += error.format_plain_text()
body += "\n"
body += "-" * 70 + "\n"
body += f"Generated by AlpineBits Email Monitoring at {now.strftime('%Y-%m-%d %H:%M:%S')}\n"
# Send email
try:
success = await self.email_service.send_alert(
recipients=self.recipients,
subject=subject,
body=body,
)
if success:
_LOGGER.info(
"Email alert sent successfully: %d errors to %s",
error_count,
self.recipients,
)
else:
_LOGGER.error("Failed to send email alert for %d errors", error_count)
except Exception:
_LOGGER.exception("Exception while sending email alert")
def close(self) -> None:
"""Close the handler and flush any remaining errors.
This is called when the logging system shuts down.
"""
# Cancel any pending flush tasks
if self._flush_task and not self._flush_task.done():
self._flush_task.cancel()
# Flush any remaining errors immediately
if self.error_buffer and self.loop:
try:
# Check if the loop is still running
if not self.loop.is_closed():
future = asyncio.run_coroutine_threadsafe(
self._flush_buffer(immediate=False),
self.loop,
)
future.result(timeout=5)
else:
_LOGGER.warning(
"Event loop closed, cannot flush %d remaining errors",
len(self.error_buffer),
)
except Exception:
_LOGGER.exception("Error flushing buffer on close")
super().close()
class DailyReportScheduler:
"""Scheduler for sending daily reports at configured times.
This runs as a background task and sends daily reports containing
statistics and error summaries.
"""
def __init__(
self,
email_service: EmailService,
config: dict[str, Any],
):
"""Initialize the daily report scheduler.
Args:
email_service: Email service for sending reports
config: Configuration for daily reports
"""
self.email_service = email_service
self.config = config
self.recipients = config.get("recipients", [])
self.send_time = config.get("send_time", "08:00") # Default 8 AM
self.include_stats = config.get("include_stats", True)
self.include_errors = config.get("include_errors", True)
self._task: asyncio.Task | None = None
self._stats_collector = None # Will be set by application
self._error_log: list[dict[str, Any]] = []
_LOGGER.info(
"DailyReportScheduler initialized: send_time=%s, recipients=%s",
self.send_time,
self.recipients,
)
def start(self) -> None:
"""Start the daily report scheduler."""
if self._task is None or self._task.done():
self._task = asyncio.create_task(self._run())
_LOGGER.info("Daily report scheduler started")
def stop(self) -> None:
"""Stop the daily report scheduler."""
if self._task and not self._task.done():
self._task.cancel()
_LOGGER.info("Daily report scheduler stopped")
def log_error(self, error: dict[str, Any]) -> None:
"""Log an error for inclusion in daily report.
Args:
error: Error information dictionary
"""
self._error_log.append(error)
async def _run(self) -> None:
"""Run the daily report scheduler loop."""
while True:
try:
# Calculate time until next report
now = datetime.now()
target_hour, target_minute = map(int, self.send_time.split(":"))
# Calculate next send time
next_send = now.replace(
hour=target_hour,
minute=target_minute,
second=0,
microsecond=0,
)
# If time has passed today, schedule for tomorrow
if next_send <= now:
next_send += timedelta(days=1)
# Calculate sleep duration
sleep_seconds = (next_send - now).total_seconds()
_LOGGER.info(
"Next daily report scheduled for %s (in %.1f hours)",
next_send.strftime("%Y-%m-%d %H:%M:%S"),
sleep_seconds / 3600,
)
# Wait until send time
await asyncio.sleep(sleep_seconds)
# Send report
await self._send_report()
except asyncio.CancelledError:
_LOGGER.info("Daily report scheduler cancelled")
break
except Exception:
_LOGGER.exception("Error in daily report scheduler")
# Sleep a bit before retrying
await asyncio.sleep(60)
async def _send_report(self) -> None:
"""Send the daily report."""
stats = {}
# Collect statistics if enabled
if self.include_stats and self._stats_collector:
try:
stats = await self._stats_collector()
except Exception:
_LOGGER.exception("Error collecting statistics for daily report")
# Get errors if enabled
errors = self._error_log.copy() if self.include_errors else None
# Send report
try:
success = await self.email_service.send_daily_report(
recipients=self.recipients,
stats=stats,
errors=errors,
)
if success:
_LOGGER.info("Daily report sent successfully to %s", self.recipients)
# Clear error log after successful send
self._error_log.clear()
else:
_LOGGER.error("Failed to send daily report")
except Exception:
_LOGGER.exception("Exception while sending daily report")
def set_stats_collector(self, collector) -> None:
"""Set the statistics collector function.
Args:
collector: Async function that returns statistics dictionary
"""
self._stats_collector = collector
class ReservationStatsCollector:
"""Collects reservation statistics per hotel for daily reports.
This collector queries the database for reservations created since the last
report and aggregates them by hotel. It includes hotel_code and hotel_name
from the configuration.
"""
def __init__(
self,
async_sessionmaker: async_sessionmaker,
config: dict[str, Any],
):
"""Initialize the stats collector.
Args:
async_sessionmaker: SQLAlchemy async session maker
config: Application configuration containing hotel information
"""
self.async_sessionmaker = async_sessionmaker
self.config = config
self._last_report_time = datetime.now()
# Build hotel mapping from config
self._hotel_map = {}
for hotel in config.get("alpine_bits_auth", []):
hotel_id = hotel.get("hotel_id")
hotel_name = hotel.get("hotel_name")
if hotel_id:
self._hotel_map[hotel_id] = hotel_name or "Unknown Hotel"
_LOGGER.info(
"ReservationStatsCollector initialized with %d hotels",
len(self._hotel_map),
)
async def collect_stats(self, lookback_hours: int | None = None) -> dict[str, Any]:
"""Collect reservation statistics for the reporting period.
Args:
lookback_hours: Optional override to look back N hours from now.
If None, uses time since last report.
Returns:
Dictionary with statistics including reservations per hotel
"""
now = datetime.now()
if lookback_hours is not None:
# Override mode: look back N hours from now
period_start = now - timedelta(hours=lookback_hours)
period_end = now
else:
# Normal mode: since last report
period_start = self._last_report_time
period_end = now
_LOGGER.info(
"Collecting reservation stats from %s to %s",
period_start.strftime("%Y-%m-%d %H:%M:%S"),
period_end.strftime("%Y-%m-%d %H:%M:%S"),
)
async with self.async_sessionmaker() as session:
# Query reservations created in the reporting period
result = await session.execute(
select(Reservation.hotel_code, func.count(Reservation.id))
.where(Reservation.created_at >= period_start)
.where(Reservation.created_at < period_end)
.group_by(Reservation.hotel_code)
)
hotel_counts = dict(result.all())
# Build stats with hotel names from config
hotels_stats = []
total_reservations = 0
for hotel_code, count in hotel_counts.items():
hotel_name = self._hotel_map.get(hotel_code, "Unknown Hotel")
hotels_stats.append(
{
"hotel_code": hotel_code,
"hotel_name": hotel_name,
"reservations": count,
}
)
total_reservations += count
# Sort by reservation count descending
hotels_stats.sort(key=lambda x: x["reservations"], reverse=True)
# Update last report time only in normal mode (not lookback mode)
if lookback_hours is None:
self._last_report_time = now
stats = {
"reporting_period": {
"start": period_start.strftime("%Y-%m-%d %H:%M:%S"),
"end": period_end.strftime("%Y-%m-%d %H:%M:%S"),
},
"total_reservations": total_reservations,
"hotels": hotels_stats,
}
_LOGGER.info(
"Collected stats: %d total reservations across %d hotels",
total_reservations,
len(hotels_stats),
)
return stats
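# Usage sketch (hypothetical wiring outside setup_logging): attach the alert
# handler to the root logger from inside a running event loop. Recipient and
# threshold values are illustrative.
#
#     handler = EmailAlertHandler(
#         email_service=email_service,
#         config={
#             "recipients": ["ops@example.com"],
#             "error_threshold": 5,
#             "buffer_minutes": 15,
#             "cooldown_minutes": 15,
#             "log_levels": ["ERROR", "CRITICAL"],
#         },
#         loop=asyncio.get_running_loop(),
#     )
#     handler.setLevel(logging.ERROR)
#     logging.getLogger().addHandler(handler)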

View File: src/alpine_bits_python/email_service.py

@@ -0,0 +1,373 @@
"""Email service for sending alerts and reports.
This module provides email functionality for the AlpineBits application,
including error alerts and daily reports.
"""
import asyncio
import smtplib
import ssl
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.utils import formatdate
from typing import Any
from .logging_config import get_logger
_LOGGER = get_logger(__name__)
class EmailConfig:
"""Configuration for email service.
Attributes:
smtp_host: SMTP server hostname
smtp_port: SMTP server port
smtp_username: SMTP authentication username
smtp_password: SMTP authentication password
use_tls: Use STARTTLS for encryption
use_ssl: Use SSL/TLS from the start
from_address: Sender email address
from_name: Sender display name
timeout: Connection timeout in seconds
"""
def __init__(self, config: dict[str, Any]):
"""Initialize email configuration from config dict.
Args:
config: Email configuration dictionary
"""
smtp_config = config.get("smtp", {})
self.smtp_host: str = smtp_config.get("host", "localhost")
self.smtp_port: int = smtp_config.get("port", 587)
self.smtp_username: str | None = smtp_config.get("username")
self.smtp_password: str | None = smtp_config.get("password")
self.use_tls: bool = smtp_config.get("use_tls", True)
self.use_ssl: bool = smtp_config.get("use_ssl", False)
self.from_address: str = config.get("from_address", "noreply@example.com")
self.from_name: str = config.get("from_name", "AlpineBits Server")
self.timeout: int = config.get("timeout", 10)
# Validate configuration
if self.use_tls and self.use_ssl:
msg = "Cannot use both TLS and SSL"
raise ValueError(msg)
class EmailService:
"""Service for sending emails via SMTP.
This service handles sending both plain text and HTML emails,
with support for TLS/SSL encryption and authentication.
"""
def __init__(self, config: EmailConfig):
"""Initialize email service.
Args:
config: Email configuration
"""
self.config = config
# Create dedicated thread pool for SMTP operations (max 2 threads is enough for email)
# This prevents issues with default executor in multi-process environments
self._executor = ThreadPoolExecutor(max_workers=2, thread_name_prefix="smtp-")
async def send_email(
self,
recipients: list[str],
subject: str,
body: str,
html_body: str | None = None,
) -> bool:
"""Send an email to recipients.
Args:
recipients: List of recipient email addresses
subject: Email subject line
body: Plain text email body
html_body: Optional HTML email body
Returns:
True if email was sent successfully, False otherwise
"""
if not recipients:
_LOGGER.warning("No recipients specified for email: %s", subject)
return False
try:
# Build message
msg = MIMEMultipart("alternative")
msg["Subject"] = subject
msg["From"] = f"{self.config.from_name} <{self.config.from_address}>"
msg["To"] = ", ".join(recipients)
msg["Date"] = datetime.now().strftime("%a, %d %b %Y %H:%M:%S %z")
# Attach plain text body
msg.attach(MIMEText(body, "plain"))
# Attach HTML body if provided
if html_body:
msg.attach(MIMEText(html_body, "html"))
# Send email in dedicated thread pool (SMTP is blocking)
loop = asyncio.get_running_loop()
await loop.run_in_executor(self._executor, self._send_smtp, msg, recipients)
_LOGGER.info("Email sent successfully to %s: %s", recipients, subject)
return True
except Exception:
_LOGGER.exception("Failed to send email to %s: %s", recipients, subject)
return False
def _send_smtp(self, msg: MIMEMultipart, recipients: list[str]) -> None:
"""Send email via SMTP (blocking operation).
Args:
msg: Email message to send
recipients: List of recipient addresses
Raises:
Exception: If email sending fails
"""
if self.config.use_ssl:
# Connect with SSL from the start
context = ssl.create_default_context()
with smtplib.SMTP_SSL(
self.config.smtp_host,
self.config.smtp_port,
timeout=self.config.timeout,
context=context,
) as server:
if self.config.smtp_username and self.config.smtp_password:
server.login(self.config.smtp_username, self.config.smtp_password)
server.send_message(msg, self.config.from_address, recipients)
else:
# Connect and optionally upgrade to TLS
with smtplib.SMTP(
self.config.smtp_host,
self.config.smtp_port,
timeout=self.config.timeout,
) as server:
if self.config.use_tls:
context = ssl.create_default_context()
server.starttls(context=context)
if self.config.smtp_username and self.config.smtp_password:
server.login(self.config.smtp_username, self.config.smtp_password)
server.send_message(msg, self.config.from_address, recipients)
async def send_alert(
self,
recipients: list[str],
subject: str,
body: str,
) -> bool:
"""Send an alert email (convenience method).
Args:
recipients: List of recipient email addresses
subject: Email subject line
body: Email body text
Returns:
True if email was sent successfully, False otherwise
"""
return await self.send_email(recipients, subject, body)
async def send_daily_report(
self,
recipients: list[str],
stats: dict[str, Any],
errors: list[dict[str, Any]] | None = None,
) -> bool:
"""Send a daily report email.
Args:
recipients: List of recipient email addresses
stats: Dictionary containing statistics to include in report
errors: Optional list of errors to include
Returns:
True if email was sent successfully, False otherwise
"""
date_str = datetime.now().strftime("%Y-%m-%d")
subject = f"AlpineBits Daily Report - {date_str}"
# Build plain text body
body = f"AlpineBits Daily Report for {date_str}\n"
body += "=" * 60 + "\n\n"
# Add statistics
if stats:
body += "Statistics:\n"
body += "-" * 60 + "\n"
for key, value in stats.items():
body += f" {key}: {value}\n"
body += "\n"
# Add errors if present
if errors:
body += f"Errors ({len(errors)}):\n"
body += "-" * 60 + "\n"
for error in errors[:20]: # Limit to 20 most recent errors
timestamp = error.get("timestamp", "Unknown")
level = error.get("level", "ERROR")
message = error.get("message", "No message")
body += f" [{timestamp}] {level}: {message}\n"
if len(errors) > 20:
body += f" ... and {len(errors) - 20} more errors\n"
body += "\n"
body += "-" * 60 + "\n"
body += "Generated by AlpineBits Server\n"
# Build HTML body for better formatting
html_body = self._build_daily_report_html(date_str, stats, errors)
return await self.send_email(recipients, subject, body, html_body)
def _build_daily_report_html(
self,
date_str: str,
stats: dict[str, Any],
errors: list[dict[str, Any]] | None,
) -> str:
"""Build HTML version of daily report.
Args:
date_str: Date string for the report
stats: Statistics dictionary
errors: Optional list of errors
Returns:
HTML string for the email body
"""
html = f"""
<html>
<head>
<style>
body {{ font-family: Arial, sans-serif; }}
h1 {{ color: #333; }}
h2 {{ color: #666; margin-top: 20px; }}
table {{ border-collapse: collapse; width: 100%; }}
th, td {{ text-align: left; padding: 8px; border-bottom: 1px solid #ddd; }}
th {{ background-color: #f2f2f2; }}
.error {{ color: #d32f2f; }}
.warning {{ color: #f57c00; }}
.footer {{ margin-top: 30px; color: #999; font-size: 12px; }}
</style>
</head>
<body>
<h1>AlpineBits Daily Report</h1>
<p><strong>Date:</strong> {date_str}</p>
"""
# Add statistics table
if stats:
html += """
<h2>Statistics</h2>
<table>
<tr>
<th>Metric</th>
<th>Value</th>
</tr>
"""
for key, value in stats.items():
html += f"""
<tr>
<td>{key}</td>
<td>{value}</td>
</tr>
"""
html += "</table>"
# Add errors table
if errors:
html += f"""
<h2>Errors ({len(errors)})</h2>
<table>
<tr>
<th>Time</th>
<th>Level</th>
<th>Message</th>
</tr>
"""
for error in errors[:20]: # Limit to 20 most recent
timestamp = error.get("timestamp", "Unknown")
level = error.get("level", "ERROR")
message = error.get("message", "No message")
css_class = "error" if level == "ERROR" or level == "CRITICAL" else "warning"
html += f"""
<tr>
<td>{timestamp}</td>
<td class="{css_class}">{level}</td>
<td>{message}</td>
</tr>
"""
if len(errors) > 20:
html += f"""
<tr>
<td colspan="3"><em>... and {len(errors) - 20} more errors</em></td>
</tr>
"""
html += "</table>"
html += """
<div class="footer">
<p>Generated by AlpineBits Server</p>
</div>
</body>
</html>
"""
return html
def shutdown(self) -> None:
"""Shutdown the email service and clean up thread pool.
This should be called during application shutdown to ensure
proper cleanup of the thread pool executor.
"""
if self._executor:
_LOGGER.info("Shutting down email service thread pool")
self._executor.shutdown(wait=True, cancel_futures=False)
_LOGGER.info("Email service thread pool shut down complete")
def create_email_service(config: dict[str, Any]) -> EmailService | None:
"""Create an email service from configuration.
Args:
config: Full application configuration dictionary
Returns:
EmailService instance if email is configured, None otherwise
"""
email_config = config.get("email")
if not email_config:
_LOGGER.info("Email not configured, email service disabled")
return None
try:
email_cfg = EmailConfig(email_config)
service = EmailService(email_cfg)
_LOGGER.info("Email service initialized: %s:%s", email_cfg.smtp_host, email_cfg.smtp_port)
return service
except Exception:
_LOGGER.exception("Failed to initialize email service")
return None

View File: src/alpine_bits_python/logging_config.py

@@ -4,16 +4,41 @@ This module sets up logging based on config and provides a function to get
loggers from anywhere in the application.
"""
import asyncio
import logging
import sys
from pathlib import Path
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from alpine_bits_python.email_monitoring import (
DailyReportScheduler,
EmailAlertHandler,
)
from alpine_bits_python.email_service import EmailService
from alpine_bits_python.pushover_service import PushoverService
def setup_logging(config: dict | None = None):
def setup_logging(
config: dict | None = None,
email_service: "EmailService | None" = None,
pushover_service: "PushoverService | None" = None,
loop: asyncio.AbstractEventLoop | None = None,
enable_scheduler: bool = True,
) -> tuple["EmailAlertHandler | None", "DailyReportScheduler | None"]:
"""Configure logging based on application config.
Args:
config: Application configuration dict with optional 'logger' section
email_service: Optional email service for email alerts
pushover_service: Optional pushover service for push notifications
loop: Optional asyncio event loop for email alerts
enable_scheduler: Whether to enable the daily report scheduler
(should be False for non-primary workers)
Returns:
Tuple of (email_alert_handler, daily_report_scheduler) if monitoring
is enabled, otherwise (None, None)
Logger config format:
logger:
@@ -67,6 +92,89 @@ def setup_logging(config: dict | None = None):
root_logger.info("Logging configured at %s level", level)
# Setup notification monitoring if configured
email_handler = None
report_scheduler = None
# Setup email monitoring if configured
if email_service:
email_config = config.get("email", {})
monitoring_config = email_config.get("monitoring", {})
# Setup error alert handler
error_alerts_config = monitoring_config.get("error_alerts", {})
if error_alerts_config.get("enabled", False):
try:
# Import here to avoid circular dependencies
from alpine_bits_python.email_monitoring import EmailAlertHandler
email_handler = EmailAlertHandler(
email_service=email_service,
config=error_alerts_config,
loop=loop,
)
email_handler.setLevel(logging.ERROR)
root_logger.addHandler(email_handler)
root_logger.info("Email alert handler enabled for error monitoring")
except Exception:
root_logger.exception("Failed to setup email alert handler")
# Setup daily report scheduler (only if enabled and this is primary worker)
daily_report_config = monitoring_config.get("daily_report", {})
if daily_report_config.get("enabled", False) and enable_scheduler:
try:
# Import here to avoid circular dependencies
from alpine_bits_python.email_monitoring import DailyReportScheduler
report_scheduler = DailyReportScheduler(
email_service=email_service,
config=daily_report_config,
)
root_logger.info("Daily report scheduler configured (primary worker)")
except Exception:
root_logger.exception("Failed to setup daily report scheduler")
elif daily_report_config.get("enabled", False) and not enable_scheduler:
root_logger.info(
"Daily report scheduler disabled (non-primary worker)"
)
# Check if Pushover daily reports are enabled
# If so and no report_scheduler exists yet, create one
if pushover_service and not report_scheduler:
pushover_config = config.get("pushover", {})
pushover_monitoring = pushover_config.get("monitoring", {})
pushover_daily_report = pushover_monitoring.get("daily_report", {})
if pushover_daily_report.get("enabled", False) and enable_scheduler:
try:
# Import here to avoid circular dependencies
from alpine_bits_python.email_monitoring import DailyReportScheduler
# Create a dummy config for the scheduler
# (it doesn't need email-specific fields if email is disabled)
scheduler_config = {
"send_time": pushover_daily_report.get("send_time", "08:00"),
"include_stats": pushover_daily_report.get("include_stats", True),
"include_errors": pushover_daily_report.get("include_errors", True),
"recipients": [], # Not used for Pushover
}
report_scheduler = DailyReportScheduler(
email_service=email_service, # Can be None
config=scheduler_config,
)
root_logger.info(
"Daily report scheduler configured for Pushover (primary worker)"
)
except Exception:
root_logger.exception("Failed to setup Pushover daily report scheduler")
elif pushover_daily_report.get("enabled", False) and not enable_scheduler:
root_logger.info(
"Pushover daily report scheduler disabled (non-primary worker)"
)
return email_handler, report_scheduler
def get_logger(name: str) -> logging.Logger:
"""Get a logger instance for the given module name.

View File

@@ -0,0 +1,127 @@
"""Adapters for notification backends.
This module provides adapters that wrap email and Pushover services
to work with the unified notification service interface.
"""
from typing import Any
from .email_service import EmailService
from .logging_config import get_logger
from .pushover_service import PushoverService
_LOGGER = get_logger(__name__)
class EmailNotificationAdapter:
"""Adapter for EmailService to work with NotificationService."""
def __init__(self, email_service: EmailService, recipients: list[str]):
"""Initialize the email notification adapter.
Args:
email_service: EmailService instance
recipients: List of recipient email addresses
"""
self.email_service = email_service
self.recipients = recipients
async def send_alert(self, title: str, message: str, **kwargs) -> bool:
"""Send an alert via email.
Args:
title: Email subject
message: Email body
**kwargs: Ignored for email
Returns:
True if sent successfully
"""
return await self.email_service.send_alert(
recipients=self.recipients,
subject=title,
body=message,
)
async def send_daily_report(
self,
stats: dict[str, Any],
errors: list[dict[str, Any]] | None = None,
**kwargs,
) -> bool:
"""Send a daily report via email.
Args:
stats: Statistics dictionary
errors: Optional list of errors
**kwargs: Ignored for email
Returns:
True if sent successfully
"""
return await self.email_service.send_daily_report(
recipients=self.recipients,
stats=stats,
errors=errors,
)
class PushoverNotificationAdapter:
"""Adapter for PushoverService to work with NotificationService."""
def __init__(self, pushover_service: PushoverService, priority: int = 0):
"""Initialize the Pushover notification adapter.
Args:
pushover_service: PushoverService instance
priority: Default priority level for notifications
"""
self.pushover_service = pushover_service
self.priority = priority
async def send_alert(self, title: str, message: str, **kwargs) -> bool:
"""Send an alert via Pushover.
Args:
title: Notification title
message: Notification message
**kwargs: Can include 'priority' to override default
Returns:
True if sent successfully
"""
priority = kwargs.get("priority", self.priority)
return await self.pushover_service.send_alert(
title=title,
message=message,
priority=priority,
)
async def send_daily_report(
self,
stats: dict[str, Any],
errors: list[dict[str, Any]] | None = None,
**kwargs,
) -> bool:
"""Send a daily report via Pushover.
Args:
stats: Statistics dictionary
errors: Optional list of errors
**kwargs: Can include 'priority' to override default
Returns:
True if sent successfully
"""
priority = kwargs.get("priority", self.priority)
return await self.pushover_service.send_daily_report(
stats=stats,
errors=errors,
priority=priority,
)

View File: src/alpine_bits_python/notification_service.py

@@ -0,0 +1,177 @@
"""Unified notification service supporting multiple backends.
This module provides a unified interface for sending notifications through
different channels (email, Pushover, etc.) for alerts and daily reports.
"""
from typing import Any, Protocol
from .logging_config import get_logger
_LOGGER = get_logger(__name__)
class NotificationBackend(Protocol):
"""Protocol for notification backends."""
async def send_alert(self, title: str, message: str, **kwargs) -> bool:
"""Send an alert notification.
Args:
title: Alert title/subject
message: Alert message/body
**kwargs: Backend-specific parameters
Returns:
True if sent successfully, False otherwise
"""
...
async def send_daily_report(
self,
stats: dict[str, Any],
errors: list[dict[str, Any]] | None = None,
**kwargs,
) -> bool:
"""Send a daily report notification.
Args:
stats: Statistics dictionary
errors: Optional list of errors
**kwargs: Backend-specific parameters
Returns:
True if sent successfully, False otherwise
"""
...
class NotificationService:
"""Unified notification service that supports multiple backends.
This service can send notifications through multiple channels simultaneously
(email, Pushover, etc.) based on configuration.
"""
def __init__(self):
"""Initialize the notification service."""
self.backends: dict[str, NotificationBackend] = {}
def register_backend(self, name: str, backend: NotificationBackend) -> None:
"""Register a notification backend.
Args:
name: Backend name (e.g., "email", "pushover")
backend: Backend instance implementing NotificationBackend protocol
"""
self.backends[name] = backend
_LOGGER.info("Registered notification backend: %s", name)
async def send_alert(
self,
title: str,
message: str,
backends: list[str] | None = None,
**kwargs,
) -> dict[str, bool]:
"""Send an alert through specified backends.
Args:
title: Alert title/subject
message: Alert message/body
backends: List of backend names to use (None = all registered)
**kwargs: Backend-specific parameters
Returns:
Dictionary mapping backend names to success status
"""
if backends is None:
backends = list(self.backends.keys())
results = {}
for backend_name in backends:
backend = self.backends.get(backend_name)
if backend is None:
_LOGGER.warning("Backend not found: %s", backend_name)
results[backend_name] = False
continue
try:
success = await backend.send_alert(title, message, **kwargs)
results[backend_name] = success
except Exception:
_LOGGER.exception(
"Error sending alert through backend %s", backend_name
)
results[backend_name] = False
return results
async def send_daily_report(
self,
stats: dict[str, Any],
errors: list[dict[str, Any]] | None = None,
backends: list[str] | None = None,
**kwargs,
) -> dict[str, bool]:
"""Send a daily report through specified backends.
Args:
stats: Statistics dictionary
errors: Optional list of errors
backends: List of backend names to use (None = all registered)
**kwargs: Backend-specific parameters
Returns:
Dictionary mapping backend names to success status
"""
if backends is None:
backends = list(self.backends.keys())
results = {}
for backend_name in backends:
backend = self.backends.get(backend_name)
if backend is None:
_LOGGER.warning("Backend not found: %s", backend_name)
results[backend_name] = False
continue
try:
success = await backend.send_daily_report(stats, errors, **kwargs)
results[backend_name] = success
except Exception:
_LOGGER.exception(
"Error sending daily report through backend %s", backend_name
)
results[backend_name] = False
return results
def get_backend(self, name: str) -> NotificationBackend | None:
"""Get a specific notification backend.
Args:
name: Backend name
Returns:
Backend instance or None if not found
"""
return self.backends.get(name)
def has_backend(self, name: str) -> bool:
"""Check if a backend is registered.
Args:
name: Backend name
Returns:
True if backend is registered
"""
return name in self.backends
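A minimal fan-out sketch to go with the class above (run it in the same module as `NotificationService`; `StubBackend` is a hypothetical backend used only to illustrate the per-backend result mapping):

```python
import asyncio

class StubBackend:
    """Hypothetical backend used only to illustrate the fan-out results."""

    async def send_alert(self, title: str, message: str, **kwargs) -> bool:
        print(f"[stub] {title}: {message}")
        return True

    async def send_daily_report(self, stats, errors=None, **kwargs) -> bool:
        return True

async def main() -> None:
    service = NotificationService()
    service.register_backend("stub", StubBackend())
    # backends=None (the default) would fan out to every registered backend.
    results = await service.send_alert("Nightly job", "Completed", backends=["stub"])
    print(results)  # {'stub': True}

asyncio.run(main())
```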

281
src/alpine_bits_python/pushover_service.py Normal file
View File

@@ -0,0 +1,281 @@
"""Pushover service for sending push notifications.
This module provides push notification functionality for the AlpineBits application,
including error alerts and daily reports via Pushover.
"""
import asyncio
from datetime import datetime
from typing import Any
from pushover_complete import PushoverAPI
from .logging_config import get_logger
_LOGGER = get_logger(__name__)
class PushoverConfig:
"""Configuration for Pushover service.
Attributes:
user_key: Pushover user/group key
api_token: Pushover application API token
"""
def __init__(self, config: dict[str, Any]):
"""Initialize Pushover configuration from config dict.
Args:
config: Pushover configuration dictionary
"""
self.user_key: str | None = config.get("user_key")
self.api_token: str | None = config.get("api_token")
# Validate configuration
if not self.user_key or not self.api_token:
msg = "Both user_key and api_token are required for Pushover"
raise ValueError(msg)
class PushoverService:
"""Service for sending push notifications via Pushover.
This service handles sending notifications through the Pushover API,
including alerts and daily reports.
"""
def __init__(self, config: PushoverConfig):
"""Initialize Pushover service.
Args:
config: Pushover configuration
"""
self.config = config
self.api = PushoverAPI(config.api_token)
async def send_notification(
self,
title: str,
message: str,
priority: int = 0,
url: str | None = None,
url_title: str | None = None,
) -> bool:
"""Send a push notification via Pushover.
Args:
title: Notification title
message: Notification message
priority: Priority level (-2 to 2, default 0)
url: Optional supplementary URL
url_title: Optional title for the URL
Returns:
True if notification was sent successfully, False otherwise
"""
try:
# Send notification in thread pool (API is blocking)
loop = asyncio.get_running_loop()
await loop.run_in_executor(
None,
self._send_pushover,
title,
message,
priority,
url,
url_title,
)
_LOGGER.info("Pushover notification sent successfully: %s", title)
return True
except Exception:
_LOGGER.exception("Failed to send Pushover notification: %s", title)
return False
def _send_pushover(
self,
title: str,
message: str,
priority: int,
url: str | None,
url_title: str | None,
) -> None:
"""Send notification via Pushover (blocking operation).
Args:
title: Notification title
message: Notification message
priority: Priority level
url: Optional URL
url_title: Optional URL title
Raises:
Exception: If notification sending fails
"""
kwargs = {
"user": self.config.user_key,
"title": title,
"message": message,
"priority": priority,
}
if url:
kwargs["url"] = url
if url_title:
kwargs["url_title"] = url_title
self.api.send_message(**kwargs)
async def send_alert(
self,
title: str,
message: str,
priority: int = 1,
) -> bool:
"""Send an alert notification (convenience method).
Args:
title: Alert title
message: Alert message
priority: Priority level (default 1 for high priority)
Returns:
True if notification was sent successfully, False otherwise
"""
return await self.send_notification(title, message, priority=priority)
async def send_daily_report(
self,
stats: dict[str, Any],
errors: list[dict[str, Any]] | None = None,
priority: int = 0,
) -> bool:
"""Send a daily report notification.
Args:
stats: Dictionary containing statistics to include in report
errors: Optional list of errors to include
priority: Priority level (default 0 for normal)
Returns:
True if notification was sent successfully, False otherwise
"""
date_str = datetime.now().strftime("%Y-%m-%d")
title = f"AlpineBits Daily Report - {date_str}"
# Build message body (Pushover has a 1024 character limit)
message = self._build_daily_report_message(date_str, stats, errors)
return await self.send_notification(title, message, priority=priority)
def _build_daily_report_message(
self,
date_str: str,
stats: dict[str, Any],
errors: list[dict[str, Any]] | None,
) -> str:
"""Build daily report message for Pushover.
Args:
date_str: Date string for the report
stats: Statistics dictionary
errors: Optional list of errors
Returns:
Formatted message string (max 1024 chars for Pushover)
"""
lines = [f"Report for {date_str}", ""]
# Add statistics (simplified for push notification)
if stats:
# Handle reporting period
period = stats.get("reporting_period", {})
if period:
start = period.get("start", "")
end = period.get("end", "")
if start and end:
# Parse the datetime strings to check if they're on different days
if " " in start and " " in end:
start_date, start_time = start.split(" ")
end_date, end_time = end.split(" ")
# If same day, just show times
if start_date == end_date:
lines.append(f"Period: {start_time} - {end_time}")
else:
# Different days, show date + time in compact format
# Format: "MM-DD HH:MM - MM-DD HH:MM"
start_compact = f"{start_date[5:]} {start_time[:5]}"
end_compact = f"{end_date[5:]} {end_time[:5]}"
lines.append(f"Period: {start_compact} - {end_compact}")
else:
# Fallback if format is unexpected
lines.append(f"Period: {start} - {end}")
# Total reservations
total = stats.get("total_reservations", 0)
lines.append(f"Total Reservations: {total}")
# Per-hotel breakdown (top 5 only to save space)
hotels = stats.get("hotels", [])
if hotels:
lines.append("")
lines.append("By Hotel:")
for hotel in hotels[:5]: # Top 5 hotels
hotel_name = hotel.get("hotel_name", "Unknown")
count = hotel.get("reservations", 0)
# Truncate long hotel names
if len(hotel_name) > 20:
hotel_name = hotel_name[:17] + "..."
lines.append(f"{hotel_name}: {count}")
if len(hotels) > 5:
lines.append(f" • ... and {len(hotels) - 5} more")
# Add error summary if present
if errors:
lines.append("")
lines.append(f"Errors: {len(errors)} (see logs)")
message = "\n".join(lines)
# Truncate if too long (Pushover limit is 1024 chars)
if len(message) > 1020:
message = message[:1017] + "..."
return message
def create_pushover_service(config: dict[str, Any]) -> PushoverService | None:
"""Create a Pushover service from configuration.
Args:
config: Full application configuration dictionary
Returns:
PushoverService instance if Pushover is configured, None otherwise
"""
pushover_config = config.get("pushover")
if not pushover_config:
_LOGGER.info("Pushover not configured, push notification service disabled")
return None
try:
pushover_cfg = PushoverConfig(pushover_config)
service = PushoverService(pushover_cfg)
_LOGGER.info("Pushover service initialized successfully")
return service
except Exception:
_LOGGER.exception("Failed to initialize Pushover service")
return None
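For reference, a sketch of the config shape `create_pushover_service` expects (key names come from the code above; values are placeholders, and the `!secret` indirection in config.yaml is resolved by the YAML loader before this function runs):

```python
# Dict shape equivalent to the YAML config after loading.
config = {
    "pushover": {
        "user_key": "uXXXXXXXX",   # from secrets.yaml: PUSHOVER_USER_KEY
        "api_token": "aXXXXXXXX",  # from secrets.yaml: PUSHOVER_API_TOKEN
    }
}
service = create_pushover_service(config)
# -> PushoverService instance; None if the "pushover" section is missing or invalid.
```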

119
src/alpine_bits_python/worker_coordination.py Normal file
View File

@@ -0,0 +1,119 @@
"""Worker coordination utilities for multi-worker FastAPI deployments.
This module provides utilities to ensure singleton services (schedulers, background tasks)
run on only one worker when using uvicorn --workers N.
"""
import fcntl
import os
from pathlib import Path
from .logging_config import get_logger
_LOGGER = get_logger(__name__)
class WorkerLock:
"""File-based lock to coordinate worker processes.
Only one worker can hold the lock at a time. This ensures singleton
services like schedulers only run on one worker.
"""
def __init__(self, lock_file: str = "/tmp/alpinebits_primary_worker.lock"):
"""Initialize the worker lock.
Args:
lock_file: Path to the lock file
"""
self.lock_file = Path(lock_file)
self.lock_fd = None
self.is_primary = False
def acquire(self) -> bool:
"""Try to acquire the primary worker lock.
Returns:
True if lock was acquired (this is the primary worker)
False if lock is held by another worker
"""
try:
# Create lock file if it doesn't exist
self.lock_file.parent.mkdir(parents=True, exist_ok=True)
# Open lock file
self.lock_fd = open(self.lock_file, "w")
# Try to acquire exclusive lock (non-blocking)
fcntl.flock(self.lock_fd.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
# Write PID to lock file for debugging
self.lock_fd.write(f"{os.getpid()}\n")
self.lock_fd.flush()
self.is_primary = True
_LOGGER.info(
"Acquired primary worker lock (pid=%d, lock_file=%s)",
os.getpid(),
self.lock_file,
)
return True
except OSError:
# Lock is held by another process
if self.lock_fd:
self.lock_fd.close()
self.lock_fd = None
self.is_primary = False
_LOGGER.info(
"Could not acquire primary worker lock - another worker is primary (pid=%d)",
os.getpid(),
)
return False
def release(self) -> None:
"""Release the primary worker lock."""
if self.lock_fd and self.is_primary:
try:
fcntl.flock(self.lock_fd.fileno(), fcntl.LOCK_UN)
self.lock_fd.close()
# Try to remove lock file (best effort)
try:
self.lock_file.unlink()
except Exception:
pass
_LOGGER.info("Released primary worker lock (pid=%d)", os.getpid())
except Exception:
_LOGGER.exception("Error releasing primary worker lock")
finally:
self.lock_fd = None
self.is_primary = False
def __enter__(self) -> "WorkerLock":
"""Context manager entry."""
self.acquire()
return self
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
"""Context manager exit."""
self.release()
def is_primary_worker() -> tuple[bool, WorkerLock | None]:
"""Determine if this worker should run singleton services.
Uses file-based locking to coordinate between workers.
Returns:
Tuple of (is_primary, lock_object)
- is_primary: True if this is the primary worker
- lock_object: WorkerLock instance (must be kept alive)
"""
lock = WorkerLock()
is_primary = lock.acquire()
return is_primary, lock
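`WorkerLock` also works as a context manager via `__enter__`/`__exit__`; a minimal sketch (the scheduler function is a hypothetical placeholder):

```python
from alpine_bits_python.worker_coordination import WorkerLock

def run_scheduler() -> None:
    """Hypothetical placeholder for a singleton service."""

with WorkerLock("/tmp/alpinebits_primary_worker.lock") as lock:
    if lock.is_primary:
        run_scheduler()
    # Non-primary workers skip singleton services and just serve requests.
# On exit, the lock (if held) is released and the lock file removed best-effort.
```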

52
test_langdetect.py Normal file
View File

@@ -0,0 +1,52 @@
#!/usr/bin/env python3
"""
Test script for fast-langdetect library
Tests language detection on various sample texts
"""
from fast_langdetect import detect
# Test strings in different languages
test_strings = [
("Hello, how are you doing today?", "English"),
("Bonjour, comment allez-vous aujourd'hui?", "French"),
("Hola, ¿cómo estás hoy?", "Spanish"),
(
"Hallo, ich würde diese Wohnung gerne am 22.10.25 am späten Nachmittag besichtigen. Wir reisen aus Berlin an. Mit freundlichen Grüßen Dr. Christoph Garcia Bartels",
"German",
),
("Ciao, come stai oggi?", "Italian"),
("Olá, como você está hoje?", "Portuguese"),
("Привет, как дела сегодня?", "Russian"),
("こんにちは、今日はお元気ですか?", "Japanese"),
("你好,你今天怎么样?", "Chinese"),
(
"Ciao, questo appartamento mi interessa e mi piacerebbe visitarlo. Grazie",
"Italian",
),
("مرحبا، كيف حالك اليوم؟", "Arabic"),
("Hej, hur mår du idag?", "Swedish"),
(
"Guten Tag! Koennte ich diese Wohnun bitte besichtigen kommen? Vielleicht sogar schon morgen, Mittwoch den 15.10.? Ich waere sowieso im Unterland und koennte gegen 12 Uhr dort sein. Danke fuer eine kurze Rueckmeldung diesbezueglich, Catherina",
"German",
),
("Witam, jak się dzisiaj masz?", "Polish"),
]
def main():
print("Testing fast-langdetect library")
print("=" * 60)
print()
for text, expected_lang in test_strings:
detected = detect(text)
print(f"Text: {text[:50]}...")
print(f"Expected: {expected_lang}")
print(f"Detected: {detected}")
print("-" * 60)
print()
if __name__ == "__main__":
main()
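For context, `detect` returns a small dict rather than a bare language code, so the `Detected:` lines above print something like this (the score shown is illustrative):

```python
from fast_langdetect import detect

print(detect("Ciao, come stai oggi?"))
# e.g. {'lang': 'it', 'score': 0.89} -- exact score varies by model version
```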

322
test_pushover.py Normal file
View File

@@ -0,0 +1,322 @@
#!/usr/bin/env python3
"""Test script to verify Pushover push notification connectivity.
This script tests Pushover API connectivity and sends test notifications
to help verify that the configuration is correct.
"""
import sys
import time
from datetime import datetime
# Load configuration from config.yaml
try:
from alpine_bits_python.config_loader import load_config
from pushover_complete import PushoverAPI
print("Loading configuration from config.yaml...")
config = load_config()
pushover_config = config.get("pushover", {})
USER_KEY = pushover_config.get("user_key", "")
API_TOKEN = pushover_config.get("api_token", "")
# Get monitoring configuration
monitoring_config = pushover_config.get("monitoring", {})
daily_report_config = monitoring_config.get("daily_report", {})
error_alerts_config = monitoring_config.get("error_alerts", {})
DAILY_REPORT_ENABLED = daily_report_config.get("enabled", False)
DAILY_REPORT_PRIORITY = daily_report_config.get("priority", 0)
ERROR_ALERTS_ENABLED = error_alerts_config.get("enabled", False)
ERROR_ALERTS_PRIORITY = error_alerts_config.get("priority", 1)
print(f"✓ Configuration loaded successfully")
print(f" User Key: {USER_KEY[:10]}... (hidden)" if USER_KEY else " User Key: (not set)")
print(f" API Token: {API_TOKEN[:10]}... (hidden)" if API_TOKEN else " API Token: (not set)")
print(f" Daily Reports: {'Enabled' if DAILY_REPORT_ENABLED else 'Disabled'} (priority: {DAILY_REPORT_PRIORITY})")
print(f" Error Alerts: {'Enabled' if ERROR_ALERTS_ENABLED else 'Disabled'} (priority: {ERROR_ALERTS_PRIORITY})")
print()
if not USER_KEY or not API_TOKEN:
print("✗ Pushover credentials not configured!")
print()
print("Please add the following to your secrets.yaml:")
print(" PUSHOVER_USER_KEY: your-user-key-here")
print(" PUSHOVER_API_TOKEN: your-app-token-here")
print()
print("Get your credentials from https://pushover.net")
sys.exit(1)
except Exception as e:
print(f"✗ Failed to load configuration: {e}")
print()
print("Make sure you have:")
print("1. config.yaml with pushover section")
print("2. secrets.yaml with PUSHOVER_USER_KEY and PUSHOVER_API_TOKEN")
sys.exit(1)
def test_simple_notification() -> bool:
"""Test sending a simple notification."""
print("Test 1: Sending simple test notification...")
try:
api = PushoverAPI(API_TOKEN)
api.send_message(
user=USER_KEY,
title="Pushover Test",
message=f"Test notification from AlpineBits server at {datetime.now().strftime('%H:%M:%S')}",
)
print("✓ Simple notification sent successfully")
print(" Check your Pushover device for the notification!")
return True
except Exception as e:
print(f"✗ Failed to send notification: {e}")
return False
def test_priority_levels() -> bool:
"""Test different priority levels."""
print("\nTest 2: Testing priority levels...")
priorities = [
(-2, "Lowest", "No alert, quiet notification"),
(-1, "Low", "No alert"),
(0, "Normal", "Standard notification"),
(1, "High", "Bypasses quiet hours"),
]
success_count = 0
for i, (priority, name, description) in enumerate(priorities):
try:
api = PushoverAPI(API_TOKEN)
api.send_message(
user=USER_KEY,
title=f"Priority Test: {name}",
message=f"Testing priority {priority} - {description}",
priority=priority,
)
print(f"✓ Sent notification with priority {priority} ({name})")
success_count += 1
# Add delay between notifications to avoid rate limiting (except after last one)
if i < len(priorities) - 1:
time.sleep(1)
except Exception as e:
print(f"✗ Failed to send priority {priority} notification: {e}")
print(f" {success_count}/{len(priorities)} priority notifications sent")
return success_count == len(priorities)
def test_daily_report_format() -> bool:
"""Test sending a message formatted like a daily report."""
print("\nTest 3: Testing daily report format...")
# Sample stats similar to what the app would generate
date_str = datetime.now().strftime("%Y-%m-%d")
stats = {
"reporting_period": {
"start": "2025-10-15 08:00:00",
"end": "2025-10-16 08:00:00",
},
"total_reservations": 12,
"hotels": [
{"hotel_name": "Bemelmans Post", "reservations": 5},
{"hotel_name": "Jagthof Kaltern", "reservations": 4},
{"hotel_name": "Residence Erika", "reservations": 3},
],
}
# Build message similar to pushover_service.py
lines = [f"Report for {date_str}", ""]
period = stats.get("reporting_period", {})
if period:
start = period.get("start", "").split(" ")[1] if " " in period.get("start", "") else ""
end = period.get("end", "").split(" ")[1] if " " in period.get("end", "") else ""
if start and end:
lines.append(f"Period: {start} - {end}")
total = stats.get("total_reservations", 0)
lines.append(f"Total Reservations: {total}")
hotels = stats.get("hotels", [])
if hotels:
lines.append("")
lines.append("By Hotel:")
for hotel in hotels[:5]:
hotel_name = hotel.get("hotel_name", "Unknown")
count = hotel.get("reservations", 0)
if len(hotel_name) > 20:
hotel_name = hotel_name[:17] + "..."
lines.append(f"{hotel_name}: {count}")
message = "\n".join(lines)
try:
api = PushoverAPI(API_TOKEN)
api.send_message(
user=USER_KEY,
title=f"AlpineBits Daily Report - {date_str}",
message=message,
priority=DAILY_REPORT_PRIORITY,
)
print("✓ Daily report format notification sent successfully")
print(f" Message preview:\n{message}")
return True
except Exception as e:
print(f"✗ Failed to send daily report notification: {e}")
return False
def test_error_alert_format() -> bool:
"""Test sending a message formatted like an error alert."""
print("\nTest 4: Testing error alert format...")
error_count = 3
title = f"🚨 AlpineBits Error Alert: {error_count} errors"
message = f"""Error Alert - {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
Alert Type: Test Alert
Error Count: {error_count}
Time Range: 14:30:00 to 14:45:00
Sample errors (see logs for details):
1. Database connection timeout
2. SMTP connection failed
3. API rate limit exceeded"""
try:
api = PushoverAPI(API_TOKEN)
api.send_message(
user=USER_KEY,
title=title,
message=message,
priority=ERROR_ALERTS_PRIORITY,
)
print("✓ Error alert format notification sent successfully")
return True
except Exception as e:
print(f"✗ Failed to send error alert notification: {e}")
return False
def test_with_url() -> bool:
"""Test notification with supplementary URL."""
print("\nTest 5: Testing notification with URL...")
try:
api = PushoverAPI(API_TOKEN)
api.send_message(
user=USER_KEY,
title="AlpineBits Server",
message="This notification includes a supplementary URL. Tap to open.",
url="https://github.com/anthropics/claude-code",
url_title="View on GitHub",
)
print("✓ Notification with URL sent successfully")
return True
except Exception as e:
print(f"✗ Failed to send notification with URL: {e}")
return False
def validate_credentials() -> bool:
"""Validate Pushover credentials."""
print("Test 0: Validating Pushover credentials...")
try:
api = PushoverAPI(API_TOKEN)
# Try to validate the user key
api.send_message(
user=USER_KEY,
title="Credential Validation",
message="If you receive this, your Pushover credentials are valid!",
)
print("✓ Credentials validated successfully")
return True
except Exception as e:
print(f"✗ Credential validation failed: {e}")
print()
print("Possible issues:")
print("1. Invalid API token (check your application settings)")
print("2. Invalid user key (check your user dashboard)")
print("3. Network connectivity issues")
print("4. Pushover service is down")
return False
def main():
"""Run all Pushover tests."""
print("=" * 70)
print("Pushover Push Notification Test Script")
print("=" * 70)
print()
# First validate credentials
if not validate_credentials():
print("\n" + "=" * 70)
print("FAILED: Cannot proceed without valid credentials")
print("=" * 70)
return 1
print()
# Run all tests
tests = [
("Simple Notification", test_simple_notification),
("Priority Levels", test_priority_levels),
("Daily Report Format", test_daily_report_format),
("Error Alert Format", test_error_alert_format),
("Notification with URL", test_with_url),
]
results = []
for i, (test_name, test_func) in enumerate(tests):
try:
result = test_func()
results.append((test_name, result))
# Add delay between tests to avoid rate limiting (except after last one)
if i < len(tests) - 1:
print(" (Waiting 1 second to avoid rate limiting...)")
time.sleep(1)
except Exception as e:
print(f"✗ Test '{test_name}' crashed: {e}")
results.append((test_name, False))
# Print summary
print("\n" + "=" * 70)
print("TEST SUMMARY")
print("=" * 70)
passed = sum(1 for _, result in results if result)
total = len(results)
for test_name, result in results:
status = "✓ PASS" if result else "✗ FAIL"
print(f" {status}: {test_name}")
print()
print(f"Results: {passed}/{total} tests passed")
if passed == total:
print("\n✓ ALL TESTS PASSED!")
print("=" * 70)
print("\nYour Pushover configuration is working correctly.")
print("Check your Pushover device for all the test notifications.")
return 0
else:
print(f"\n{total - passed} TEST(S) FAILED")
print("=" * 70)
print("\nSome tests failed. Check the output above for details.")
return 1
if __name__ == "__main__":
try:
sys.exit(main())
except KeyboardInterrupt:
print("\n\nTest cancelled by user")
sys.exit(1)

294
test_smtp.py Normal file
View File

@@ -0,0 +1,294 @@
#!/usr/bin/env python3
"""Test script to diagnose SMTP connection issues.
This script tests SMTP connectivity with different configurations to help
identify whether the issue is with credentials, network, ports, or TLS settings.
"""
import smtplib
import socket
import ssl
import sys
from datetime import datetime
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
# Load configuration from config.yaml
try:
from alpine_bits_python.config_loader import load_config
print("Loading configuration from config.yaml...")
config = load_config()
email_config = config.get("email", {})
smtp_config = email_config.get("smtp", {})
SMTP_HOST = smtp_config.get("host", "smtp.titan.email")
SMTP_PORT = smtp_config.get("port", 465)
SMTP_USERNAME = smtp_config.get("username", "")
SMTP_PASSWORD = smtp_config.get("password", "")
USE_TLS = smtp_config.get("use_tls", False)
USE_SSL = smtp_config.get("use_ssl", True)
FROM_ADDRESS = email_config.get("from_address", "info@99tales.net")
FROM_NAME = email_config.get("from_name", "AlpineBits Monitor")
# Get test recipient
monitoring_config = email_config.get("monitoring", {})
daily_report = monitoring_config.get("daily_report", {})
recipients = daily_report.get("recipients", [])
TEST_RECIPIENT = recipients[0] if recipients else "jonas@vaius.ai"
print(f"✓ Configuration loaded successfully")
print(f" SMTP Host: {SMTP_HOST}")
print(f" SMTP Port: {SMTP_PORT}")
print(f" Username: {SMTP_USERNAME}")
print(f" Password: {'***' if SMTP_PASSWORD else '(not set)'}")
print(f" Use SSL: {USE_SSL}")
print(f" Use TLS: {USE_TLS}")
print(f" From: {FROM_ADDRESS}")
print(f" Test Recipient: {TEST_RECIPIENT}")
print()
except Exception as e:
print(f"✗ Failed to load configuration: {e}")
print("Using default values for testing...")
SMTP_HOST = "smtp.titan.email"
SMTP_PORT = 465
SMTP_USERNAME = input("Enter SMTP username: ")
SMTP_PASSWORD = input("Enter SMTP password: ")
USE_TLS = False
USE_SSL = True
FROM_ADDRESS = "info@99tales.net"
FROM_NAME = "AlpineBits Monitor"
TEST_RECIPIENT = input("Enter test recipient email: ")
print()
def create_test_message(subject: str) -> MIMEMultipart:
"""Create a test email message."""
msg = MIMEMultipart("alternative")
msg["Subject"] = subject
msg["From"] = f"{FROM_NAME} <{FROM_ADDRESS}>"
msg["To"] = TEST_RECIPIENT
msg["Date"] = datetime.now().strftime("%a, %d %b %Y %H:%M:%S %z")
body = f"""SMTP Connection Test - {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}
This is a test email to verify SMTP connectivity.
Configuration:
- SMTP Host: {SMTP_HOST}
- SMTP Port: {SMTP_PORT}
- Use SSL: {USE_SSL}
- Use TLS: {USE_TLS}
If you received this email, the SMTP configuration is working correctly!
"""
msg.attach(MIMEText(body, "plain"))
return msg
def test_smtp_connection(host: str, port: int, timeout: int = 10) -> bool:
"""Test basic TCP connection to SMTP server."""
print(f"Test 1: Testing TCP connection to {host}:{port}...")
try:
sock = socket.create_connection((host, port), timeout=timeout)
sock.close()
print(f"✓ TCP connection successful to {host}:{port}")
return True
except socket.timeout:
print(f"✗ Connection timed out after {timeout} seconds")
print(f" This suggests a network/firewall issue blocking access to {host}:{port}")
return False
except socket.error as e:
print(f"✗ Connection failed: {e}")
return False
def test_smtp_ssl(host: str, port: int, username: str, password: str, timeout: int = 30) -> bool:
"""Test SMTP connection with SSL."""
print(f"\nTest 2: Testing SMTP with SSL (port {port})...")
try:
context = ssl.create_default_context()
with smtplib.SMTP_SSL(host, port, timeout=timeout, context=context) as server:
print(f"✓ Connected to SMTP server with SSL")
# Try to get server info
server.ehlo()
print(f"✓ EHLO successful")
# Try authentication if credentials provided
if username and password:
print(f" Attempting authentication as: {username}")
server.login(username, password)
print(f"✓ Authentication successful")
else:
print(f"⚠ No credentials provided, skipping authentication")
return True
except smtplib.SMTPAuthenticationError as e:
print(f"✗ Authentication failed: {e}")
print(f" Check your username and password")
return False
except socket.timeout:
print(f"✗ Connection timed out after {timeout} seconds")
print(f" Try increasing timeout or check network/firewall")
return False
except Exception as e:
print(f"✗ SMTP SSL failed: {e}")
return False
def test_smtp_tls(host: str, port: int, username: str, password: str, timeout: int = 30) -> bool:
"""Test SMTP connection with STARTTLS."""
print(f"\nTest 3: Testing SMTP with STARTTLS (port {port})...")
try:
with smtplib.SMTP(host, port, timeout=timeout) as server:
print(f"✓ Connected to SMTP server")
# Try STARTTLS
context = ssl.create_default_context()
server.starttls(context=context)
print(f"✓ STARTTLS successful")
# Try authentication if credentials provided
if username and password:
print(f" Attempting authentication as: {username}")
server.login(username, password)
print(f"✓ Authentication successful")
else:
print(f"⚠ No credentials provided, skipping authentication")
return True
except smtplib.SMTPAuthenticationError as e:
print(f"✗ Authentication failed: {e}")
return False
except socket.timeout:
print(f"✗ Connection timed out after {timeout} seconds")
return False
except Exception as e:
print(f"✗ SMTP TLS failed: {e}")
return False
def send_test_email(host: str, port: int, username: str, password: str,
use_ssl: bool, use_tls: bool, timeout: int = 30) -> bool:
"""Send an actual test email."""
print(f"\nTest 4: Sending test email...")
try:
msg = create_test_message("SMTP Test Email - AlpineBits")
if use_ssl:
context = ssl.create_default_context()
with smtplib.SMTP_SSL(host, port, timeout=timeout, context=context) as server:
if username and password:
server.login(username, password)
server.send_message(msg, FROM_ADDRESS, [TEST_RECIPIENT])
else:
with smtplib.SMTP(host, port, timeout=timeout) as server:
if use_tls:
context = ssl.create_default_context()
server.starttls(context=context)
if username and password:
server.login(username, password)
server.send_message(msg, FROM_ADDRESS, [TEST_RECIPIENT])
print(f"✓ Test email sent successfully to {TEST_RECIPIENT}")
print(f" Check your inbox!")
return True
except Exception as e:
print(f"✗ Failed to send email: {e}")
return False
def main():
"""Run all SMTP tests."""
print("=" * 70)
print("SMTP Connection Test Script")
print("=" * 70)
print()
# Test 1: Basic TCP connection
tcp_ok = test_smtp_connection(SMTP_HOST, SMTP_PORT, timeout=10)
if not tcp_ok:
print("\n" + "=" * 70)
print("DIAGNOSIS: Cannot establish TCP connection to SMTP server")
print("=" * 70)
print("\nPossible causes:")
print("1. The SMTP server is down or unreachable")
print("2. A firewall is blocking the connection")
print("3. The host or port is incorrect")
print("4. Network connectivity issues from your container/server")
print("\nTroubleshooting:")
print(f"- Verify the server is correct: {SMTP_HOST}")
print(f"- Verify the port is correct: {SMTP_PORT}")
print("- Check if your container/server has outbound internet access")
print("- Try from a different network or machine")
print(f"- Use telnet/nc to test: telnet {SMTP_HOST} {SMTP_PORT}")
return 1
# Test 2 & 3: Try both SSL and TLS
ssl_ok = False
tls_ok = False
if USE_SSL:
ssl_ok = test_smtp_ssl(SMTP_HOST, SMTP_PORT, SMTP_USERNAME, SMTP_PASSWORD, timeout=30)
# Also try common alternative ports
if not ssl_ok and SMTP_PORT == 465:
print("\n⚠ Port 465 failed, trying port 587 with STARTTLS...")
tls_ok = test_smtp_tls(SMTP_HOST, 587, SMTP_USERNAME, SMTP_PASSWORD, timeout=30)
if USE_TLS:
tls_ok = test_smtp_tls(SMTP_HOST, SMTP_PORT, SMTP_USERNAME, SMTP_PASSWORD, timeout=30)
if not ssl_ok and not tls_ok:
print("\n" + "=" * 70)
print("DIAGNOSIS: Cannot authenticate or establish secure connection")
print("=" * 70)
print("\nPossible causes:")
print("1. Wrong username or password")
print("2. Wrong port for the encryption method")
print("3. SSL/TLS version mismatch")
print("\nTroubleshooting:")
print("- Verify your credentials are correct")
print("- Port 465 typically uses SSL")
print("- Port 587 typically uses STARTTLS")
print("- Port 25 is usually unencrypted (not recommended)")
return 1
# Test 4: Send actual email
send_ok = send_test_email(
SMTP_HOST, SMTP_PORT, SMTP_USERNAME, SMTP_PASSWORD,
USE_SSL, USE_TLS, timeout=30
)
print("\n" + "=" * 70)
if send_ok:
print("✓ ALL TESTS PASSED!")
print("=" * 70)
print("\nYour SMTP configuration is working correctly.")
print(f"Check {TEST_RECIPIENT} for the test email.")
else:
print("⚠ PARTIAL SUCCESS")
print("=" * 70)
print("\nConnection and authentication work, but email sending failed.")
print("This might be a temporary issue. Try again.")
return 0
if __name__ == "__main__":
try:
sys.exit(main())
except KeyboardInterrupt:
print("\n\nTest cancelled by user")
sys.exit(1)

358
tests/test_email_service.py Normal file
View File

@@ -0,0 +1,358 @@
"""Tests for email service and monitoring functionality."""
import asyncio
import logging
from datetime import datetime
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from alpine_bits_python.email_monitoring import (
DailyReportScheduler,
EmailAlertHandler,
ErrorRecord,
)
from alpine_bits_python.email_service import EmailConfig, EmailService
class TestEmailConfig:
"""Tests for EmailConfig class."""
def test_email_config_initialization(self):
"""Test basic email configuration initialization."""
config = {
"smtp": {
"host": "smtp.example.com",
"port": 587,
"username": "test@example.com",
"password": "password123",
"use_tls": True,
"use_ssl": False,
},
"from_address": "sender@example.com",
"from_name": "Test Sender",
}
email_config = EmailConfig(config)
assert email_config.smtp_host == "smtp.example.com"
assert email_config.smtp_port == 587
assert email_config.smtp_username == "test@example.com"
assert email_config.smtp_password == "password123"
assert email_config.use_tls is True
assert email_config.use_ssl is False
assert email_config.from_address == "sender@example.com"
assert email_config.from_name == "Test Sender"
def test_email_config_defaults(self):
"""Test email configuration with default values."""
config = {}
email_config = EmailConfig(config)
assert email_config.smtp_host == "localhost"
assert email_config.smtp_port == 587
assert email_config.use_tls is True
assert email_config.use_ssl is False
assert email_config.from_address == "noreply@example.com"
def test_email_config_tls_ssl_conflict(self):
"""Test that TLS and SSL cannot both be enabled."""
config = {
"smtp": {
"use_tls": True,
"use_ssl": True,
}
}
with pytest.raises(ValueError, match="Cannot use both TLS and SSL"):
EmailConfig(config)
class TestEmailService:
"""Tests for EmailService class."""
@pytest.fixture
def email_config(self):
"""Provide a test email configuration."""
return EmailConfig(
{
"smtp": {
"host": "smtp.example.com",
"port": 587,
"username": "test@example.com",
"password": "password123",
"use_tls": True,
},
"from_address": "sender@example.com",
"from_name": "Test Sender",
}
)
@pytest.fixture
def email_service(self, email_config):
"""Provide an EmailService instance."""
return EmailService(email_config)
@pytest.mark.asyncio
async def test_send_email_success(self, email_service):
"""Test successful email sending."""
with patch.object(email_service, "_send_smtp") as mock_smtp:
result = await email_service.send_email(
recipients=["test@example.com"],
subject="Test Subject",
body="Test body",
)
assert result is True
assert mock_smtp.called
@pytest.mark.asyncio
async def test_send_email_no_recipients(self, email_service):
"""Test email sending with no recipients."""
result = await email_service.send_email(
recipients=[],
subject="Test Subject",
body="Test body",
)
assert result is False
@pytest.mark.asyncio
async def test_send_email_with_html(self, email_service):
"""Test email sending with HTML body."""
with patch.object(email_service, "_send_smtp") as mock_smtp:
result = await email_service.send_email(
recipients=["test@example.com"],
subject="Test Subject",
body="Plain text body",
html_body="<html><body>HTML body</body></html>",
)
assert result is True
assert mock_smtp.called
@pytest.mark.asyncio
async def test_send_alert(self, email_service):
"""Test send_alert convenience method."""
with patch.object(
email_service, "send_email", new_callable=AsyncMock
) as mock_send:
mock_send.return_value = True
result = await email_service.send_alert(
recipients=["test@example.com"],
subject="Alert Subject",
body="Alert body",
)
assert result is True
mock_send.assert_called_once_with(
["test@example.com"], "Alert Subject", "Alert body"
)
@pytest.mark.asyncio
async def test_send_daily_report(self, email_service):
"""Test daily report email generation and sending."""
with patch.object(
email_service, "send_email", new_callable=AsyncMock
) as mock_send:
mock_send.return_value = True
stats = {
"total_reservations": 42,
"new_customers": 15,
}
errors = [
{
"timestamp": "2025-10-15 10:30:00",
"level": "ERROR",
"message": "Test error message",
}
]
result = await email_service.send_daily_report(
recipients=["admin@example.com"],
stats=stats,
errors=errors,
)
assert result is True
assert mock_send.called
call_args = mock_send.call_args
assert "admin@example.com" in call_args[0][0]
assert "Daily Report" in call_args[0][1]
class TestErrorRecord:
"""Tests for ErrorRecord class."""
def test_error_record_creation(self):
"""Test creating an ErrorRecord from a logging record."""
log_record = logging.LogRecord(
name="test.logger",
level=logging.ERROR,
pathname="/path/to/file.py",
lineno=42,
msg="Test error message",
args=(),
exc_info=None,
)
error_record = ErrorRecord(log_record)
assert error_record.level == "ERROR"
assert error_record.logger_name == "test.logger"
assert error_record.message == "Test error message"
assert error_record.module == "file"
assert error_record.line_no == 42
def test_error_record_to_dict(self):
"""Test converting ErrorRecord to dictionary."""
log_record = logging.LogRecord(
name="test.logger",
level=logging.ERROR,
pathname="/path/to/file.py",
lineno=42,
msg="Test error",
args=(),
exc_info=None,
)
error_record = ErrorRecord(log_record)
error_dict = error_record.to_dict()
assert error_dict["level"] == "ERROR"
assert error_dict["message"] == "Test error"
assert error_dict["line_no"] == 42
assert "timestamp" in error_dict
def test_error_record_format_plain_text(self):
"""Test formatting ErrorRecord as plain text."""
log_record = logging.LogRecord(
name="test.logger",
level=logging.ERROR,
pathname="/path/to/file.py",
lineno=42,
msg="Test error",
args=(),
exc_info=None,
)
error_record = ErrorRecord(log_record)
formatted = error_record.format_plain_text()
assert "ERROR" in formatted
assert "Test error" in formatted
assert "file:42" in formatted
class TestEmailAlertHandler:
"""Tests for EmailAlertHandler class."""
@pytest.fixture
def mock_email_service(self):
"""Provide a mock email service."""
service = MagicMock(spec=EmailService)
service.send_alert = AsyncMock(return_value=True)
return service
@pytest.fixture
def handler_config(self):
"""Provide handler configuration."""
return {
"recipients": ["alert@example.com"],
"error_threshold": 3,
"buffer_minutes": 1, # Short for testing
"cooldown_minutes": 5,
"log_levels": ["ERROR", "CRITICAL"],
}
@pytest.fixture
def alert_handler(self, mock_email_service, handler_config):
"""Provide an EmailAlertHandler instance."""
loop = asyncio.new_event_loop()
handler = EmailAlertHandler(mock_email_service, handler_config, loop)
yield handler
loop.close()
def test_handler_initialization(self, alert_handler, handler_config):
"""Test handler initialization."""
assert alert_handler.error_threshold == 3
assert alert_handler.buffer_minutes == 1
assert alert_handler.cooldown_minutes == 5
assert alert_handler.recipients == ["alert@example.com"]
def test_handler_ignores_non_error_levels(self, alert_handler):
"""Test that handler ignores INFO and WARNING levels."""
log_record = logging.LogRecord(
name="test",
level=logging.INFO,
pathname="/test.py",
lineno=1,
msg="Info message",
args=(),
exc_info=None,
)
alert_handler.emit(log_record)
# Should not buffer INFO messages
assert len(alert_handler.error_buffer) == 0
class TestDailyReportScheduler:
"""Tests for DailyReportScheduler class."""
@pytest.fixture
def mock_email_service(self):
"""Provide a mock email service."""
service = MagicMock(spec=EmailService)
service.send_daily_report = AsyncMock(return_value=True)
return service
@pytest.fixture
def scheduler_config(self):
"""Provide scheduler configuration."""
return {
"recipients": ["report@example.com"],
"send_time": "08:00",
"include_stats": True,
"include_errors": True,
}
@pytest.fixture
def scheduler(self, mock_email_service, scheduler_config):
"""Provide a DailyReportScheduler instance."""
return DailyReportScheduler(mock_email_service, scheduler_config)
def test_scheduler_initialization(self, scheduler, scheduler_config):
"""Test scheduler initialization."""
assert scheduler.send_time == "08:00"
assert scheduler.recipients == ["report@example.com"]
assert scheduler.include_stats is True
assert scheduler.include_errors is True
def test_scheduler_log_error(self, scheduler):
"""Test logging errors for daily report."""
error = {
"timestamp": datetime.now().isoformat(),
"level": "ERROR",
"message": "Test error",
}
scheduler.log_error(error)
assert len(scheduler._error_log) == 1
assert scheduler._error_log[0]["message"] == "Test error"
def test_scheduler_set_stats_collector(self, scheduler):
"""Test setting stats collector function."""
async def mock_collector():
return {"test": "stats"}
scheduler.set_stats_collector(mock_collector)
assert scheduler._stats_collector is mock_collector

62
test_worker_coordination.py Normal file
View File

@@ -0,0 +1,62 @@
#!/usr/bin/env python3
"""Test script to verify worker coordination with file locking.
This simulates multiple workers trying to acquire the primary worker lock.
"""
import multiprocessing
import time
from pathlib import Path
from src.alpine_bits_python.worker_coordination import WorkerLock
def worker_process(worker_id: int, lock_file: str):
"""Simulate a worker process trying to acquire the lock."""
print(f"Worker {worker_id} (PID {multiprocessing.current_process().pid}): Starting")
lock = WorkerLock(lock_file)
is_primary = lock.acquire()
if is_primary:
print(f"Worker {worker_id} (PID {multiprocessing.current_process().pid}): ✓ I am PRIMARY")
# Simulate running singleton services
time.sleep(3)
print(f"Worker {worker_id} (PID {multiprocessing.current_process().pid}): Releasing lock")
lock.release()
else:
print(f"Worker {worker_id} (PID {multiprocessing.current_process().pid}): ✗ I am SECONDARY")
# Simulate regular worker work
time.sleep(3)
print(f"Worker {worker_id} (PID {multiprocessing.current_process().pid}): Exiting")
if __name__ == "__main__":
# Use a test lock file
lock_file = "/tmp/test_alpinebits_worker.lock"
# Clean up any existing lock file
Path(lock_file).unlink(missing_ok=True)
print("Starting 4 worker processes (simulating uvicorn --workers 4)")
print("=" * 70)
# Start multiple workers
processes = []
for i in range(4):
p = multiprocessing.Process(target=worker_process, args=(i, lock_file))
p.start()
processes.append(p)
# Small delay to make output clearer
time.sleep(0.1)
# Wait for all workers to complete
for p in processes:
p.join()
print("=" * 70)
print("✓ Test complete: Only ONE worker should have been PRIMARY")
# Clean up
Path(lock_file).unlink(missing_ok=True)

100
uv.lock generated
View File

@@ -22,10 +22,12 @@ dependencies = [
{ name = "aiosqlite" },
{ name = "annotatedyaml" },
{ name = "dotenv" },
{ name = "fast-langdetect" },
{ name = "fastapi" },
{ name = "generateds" },
{ name = "httpx" },
{ name = "lxml" },
{ name = "pushover-complete" },
{ name = "pydantic", extra = ["email"] },
{ name = "pytest" },
{ name = "pytest-asyncio" },
@@ -49,10 +51,12 @@ requires-dist = [
{ name = "aiosqlite", specifier = ">=0.21.0" },
{ name = "annotatedyaml", specifier = ">=1.0.0" },
{ name = "dotenv", specifier = ">=0.9.9" },
{ name = "fast-langdetect", specifier = ">=1.0.0" },
{ name = "fastapi", specifier = ">=0.117.1" },
{ name = "generateds", specifier = ">=2.44.3" },
{ name = "httpx", specifier = ">=0.28.1" },
{ name = "lxml", specifier = ">=6.0.1" },
{ name = "pushover-complete", specifier = ">=2.0.0" },
{ name = "pydantic", extras = ["email"], specifier = ">=2.11.9" },
{ name = "pytest", specifier = ">=8.4.2" },
{ name = "pytest-asyncio", specifier = ">=1.2.0" },
@@ -204,6 +208,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
]
[[package]]
name = "colorlog"
version = "6.9.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d3/7a/359f4d5df2353f26172b3cc39ea32daa39af8de522205f512f458923e677/colorlog-6.9.0.tar.gz", hash = "sha256:bfba54a1b93b94f54e1f4fe48395725a3d92fd2a4af702f6bd70946bdc0c6ac2", size = 16624, upload-time = "2024-10-29T18:34:51.011Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e3/51/9b208e85196941db2f0654ad0357ca6388ab3ed67efdbfc799f35d1f83aa/colorlog-6.9.0-py3-none-any.whl", hash = "sha256:5906e71acd67cb07a71e779c47c4bcb45fb8c2993eebe9e5adcd6a6f1b283eff", size = 11424, upload-time = "2024-10-29T18:34:49.815Z" },
]
[[package]]
name = "coverage"
version = "7.10.7"
@@ -323,6 +339,20 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/de/15/545e2b6cf2e3be84bc1ed85613edd75b8aea69807a71c26f4ca6a9258e82/email_validator-2.3.0-py3-none-any.whl", hash = "sha256:80f13f623413e6b197ae73bb10bf4eb0908faf509ad8362c5edeb0be7fd450b4", size = 35604, upload-time = "2025-08-26T13:09:05.858Z" },
]
[[package]]
name = "fast-langdetect"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "fasttext-predict" },
{ name = "requests" },
{ name = "robust-downloader" },
]
sdist = { url = "https://files.pythonhosted.org/packages/53/15/85b0137066be418b6249d8e8d98e2b16c072c65b80c293b9438fdea1be5e/fast_langdetect-1.0.0.tar.gz", hash = "sha256:ea8ac6a8914e0ff1bfc1bbc0f25992eb913ddb69e63ea1b24e907e263d0cd113", size = 796192, upload-time = "2025-09-17T06:32:26.86Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f6/71/0db1ac89f8661048ebc22d62f503a2e147cb6872c5f2aeb659c1f02c1694/fast_langdetect-1.0.0-py3-none-any.whl", hash = "sha256:aab9e3435cc667ac8ba8b1a38872f75492f65b7087901d0f3a02a88d436cd22a", size = 789944, upload-time = "2025-09-17T06:32:25.363Z" },
]
[[package]]
name = "fastapi"
version = "0.117.1"
@@ -337,6 +367,38 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/6d/45/d9d3e8eeefbe93be1c50060a9d9a9f366dba66f288bb518a9566a23a8631/fastapi-0.117.1-py3-none-any.whl", hash = "sha256:33c51a0d21cab2b9722d4e56dbb9316f3687155be6b276191790d8da03507552", size = 95959, upload-time = "2025-09-20T20:16:53.661Z" },
]
[[package]]
name = "fasttext-predict"
version = "0.9.2.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/fc/0e/9defbb9385bcb1104cc1d686a14f7d9fafe5fe43f220cccb00f33d91bb47/fasttext_predict-0.9.2.4.tar.gz", hash = "sha256:18a6fb0d74c7df9280db1f96cb75d990bfd004fa9d669493ea3dd3d54f84dbc7", size = 16332, upload-time = "2024-11-23T17:24:44.801Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/89/fc/5cd65224c33e33d6faec3fa1047162dc266ed2213016139d936bd36fb7c3/fasttext_predict-0.9.2.4-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ddb85e62c95e4e02d417c782e3434ef65554df19e3522f5230f6be15a9373c05", size = 104916, upload-time = "2024-11-23T17:23:43.367Z" },
{ url = "https://files.pythonhosted.org/packages/d9/53/8d542773e32c9d98dd8c680e390fe7e6d4fc92ab3439dc1bb8e70c46c7ad/fasttext_predict-0.9.2.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:102129d45cf98dda871e83ae662f71d999b9ef6ff26bc842ffc1520a1f82930c", size = 97502, upload-time = "2024-11-23T17:23:44.447Z" },
{ url = "https://files.pythonhosted.org/packages/50/99/049fd6b01937705889bd9a00c31e5c55f0ae4b7704007b2ef7a82bf2b867/fasttext_predict-0.9.2.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:05ba6a0fbf8cb2141b8ca2bc461db97af8ac31a62341e4696a75048b9de39e10", size = 282951, upload-time = "2024-11-23T17:23:46.31Z" },
{ url = "https://files.pythonhosted.org/packages/83/cb/79b71709edbb53c3c5f8a8b60fe2d3bc98d28a8e75367c89afedf3307aa9/fasttext_predict-0.9.2.4-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0c7a779215571296ecfcf86545cb30ec3f1c6f43cbcd69f83cc4f67049375ea1", size = 307377, upload-time = "2024-11-23T17:23:47.685Z" },
{ url = "https://files.pythonhosted.org/packages/7c/4a/b15b7be003e76613173cc77d9c6cce4bf086073079354e0177deaa768f59/fasttext_predict-0.9.2.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ddd2f03f3f206585543f5274b1dbc5f651bae141a1b14c9d5225c2a12e5075c2", size = 295746, upload-time = "2024-11-23T17:23:49.024Z" },
{ url = "https://files.pythonhosted.org/packages/e3/d3/f030cd45bdd4b052fcf23e730fdf0804e024b0cad43d7c7f8704faaec2f5/fasttext_predict-0.9.2.4-cp313-cp313-manylinux_2_31_armv7l.whl", hash = "sha256:748f9edc3222a1fb7a61331c4e06d3b7f2390ae493f91f09d372a00b81762a8d", size = 236939, upload-time = "2024-11-23T17:23:50.306Z" },
{ url = "https://files.pythonhosted.org/packages/a2/01/6f2985afd58fdc5f4ecd058d5d9427d03081d468960982df97316c03f6bb/fasttext_predict-0.9.2.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1aee47a40757cd24272b34eaf9ceeea86577fd0761b0fd0e41599c6549abdf04", size = 1214189, upload-time = "2024-11-23T17:23:51.647Z" },
{ url = "https://files.pythonhosted.org/packages/75/07/931bcdd4e2406e45e54d57e056c2e0766616a5280a18fbf6ef078aa439ab/fasttext_predict-0.9.2.4-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:6ff0f152391ee03ffc18495322100c01735224f7843533a7c4ff33c8853d7be1", size = 1099889, upload-time = "2024-11-23T17:23:53.127Z" },
{ url = "https://files.pythonhosted.org/packages/a2/eb/6521b4bbf387252a96a6dc0f54986f078a93db0a9d4ba77258dcf1fa8be7/fasttext_predict-0.9.2.4-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:4d92f5265318b41d6e68659fd459babbff692484e492c5013995b90a56b517c9", size = 1383959, upload-time = "2024-11-23T17:23:54.521Z" },
{ url = "https://files.pythonhosted.org/packages/b7/6b/d56606761afb3a3912c52971f0f804e2e9065f049c412b96c47d6fca6218/fasttext_predict-0.9.2.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:3a7720cce1b8689d88df76cac1425e84f9911c69a4e40a5309d7d3435e1bb97c", size = 1281097, upload-time = "2024-11-23T17:23:55.9Z" },
{ url = "https://files.pythonhosted.org/packages/91/83/55bb4a37bb3b3a428941f4e1323c345a662254f576f8860b3098d9742510/fasttext_predict-0.9.2.4-cp313-cp313-win32.whl", hash = "sha256:d16acfced7871ed0cd55b476f0dbdddc7a5da1ffc9745a3c5674846cf1555886", size = 91137, upload-time = "2024-11-23T17:23:57.886Z" },
{ url = "https://files.pythonhosted.org/packages/9c/1d/c1ccc8790ce54200c84164d99282f088dddb9760aeefc8860856aafa40b4/fasttext_predict-0.9.2.4-cp313-cp313-win_amd64.whl", hash = "sha256:96a23328729ce62a851f8953582e576ca075ee78d637df4a78a2b3609784849e", size = 104896, upload-time = "2024-11-23T17:23:59.028Z" },
{ url = "https://files.pythonhosted.org/packages/a4/c9/a1ccc749c59e2480767645ecc03bd842a7fa5b2b780d69ac370e6f8298d2/fasttext_predict-0.9.2.4-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:b1357d0d9d8568db84668b57e7c6880b9c46f757e8954ad37634402d36f09dba", size = 109401, upload-time = "2024-11-23T17:24:00.191Z" },
{ url = "https://files.pythonhosted.org/packages/90/1f/33182b76eb0524155e8ff93e7939feaf5325385e5ff2a154f383d9a02317/fasttext_predict-0.9.2.4-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:9604c464c5d86c7eba34b040080be7012e246ef512b819e428b7deb817290dae", size = 102131, upload-time = "2024-11-23T17:24:02.052Z" },
{ url = "https://files.pythonhosted.org/packages/2b/df/1886daea373382e573f28ce49e3fc8fb6b0ee0c84e2b0becf5b254cd93fb/fasttext_predict-0.9.2.4-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cc6da186c2e4497cbfaba9c5424e58c7b72728b25d980829eb96daccd7cface1", size = 287396, upload-time = "2024-11-23T17:24:03.294Z" },
{ url = "https://files.pythonhosted.org/packages/35/8f/d1c2c0f0251bee898d508253a437683b0480a1074cfb25ded1f7fdbb925a/fasttext_predict-0.9.2.4-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:366ed2ca4f4170418f3585e92059cf17ee2c963bf179111c5b8ba48f06cd69d1", size = 311090, upload-time = "2024-11-23T17:24:04.625Z" },
{ url = "https://files.pythonhosted.org/packages/5d/52/07d6ed46148662fae84166bc69d944caca87fabc850ebfbd9640b20dafe7/fasttext_predict-0.9.2.4-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2f1877edbb815a43e7d38cc7332202e759054cf0b5a4b7e34a743c0f5d6e7333", size = 300359, upload-time = "2024-11-23T17:24:06.486Z" },
{ url = "https://files.pythonhosted.org/packages/fa/a1/751ff471a991e5ed0bae9e7fa6fc8d8ab76b233a7838a27d70d62bed0c8e/fasttext_predict-0.9.2.4-cp313-cp313t-manylinux_2_31_armv7l.whl", hash = "sha256:f63c31352ba6fc910290b0fe12733770acd8cfa0945fcb9cf3984d241abcfc9d", size = 241164, upload-time = "2024-11-23T17:24:08.501Z" },
{ url = "https://files.pythonhosted.org/packages/94/19/e251f699a0e9c001fa672ea0929c456160faa68ecfafc19e8def09982b6a/fasttext_predict-0.9.2.4-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:898e14b03fbfb0a8d9a5185a0a00ff656772b3baa37cad122e06e8e4d6da3832", size = 1218629, upload-time = "2024-11-23T17:24:10.04Z" },
{ url = "https://files.pythonhosted.org/packages/1d/46/1af2f779f8cfd746496a226581f747d3051888e3e3c5b2ca37231e5d04f8/fasttext_predict-0.9.2.4-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:a33bb5832a69fc54d18cadcf015677c1acb5ccc7f0125d261df2a89f8aff01f6", size = 1100535, upload-time = "2024-11-23T17:24:11.5Z" },
{ url = "https://files.pythonhosted.org/packages/4c/b7/900ccd74a9ba8be7ca6d04bba684e9c43fb0dbed8a3d12ec0536228e2c32/fasttext_predict-0.9.2.4-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:7fe9e98bd0701d598bf245eb2fbf592145cd03551684a2102a4b301294b9bd87", size = 1387651, upload-time = "2024-11-23T17:24:13.135Z" },
{ url = "https://files.pythonhosted.org/packages/0b/5a/99fdaed054079f7c96e70df0d7016c4eb6b9e487a614396dd8f849244a52/fasttext_predict-0.9.2.4-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:dcb8c5a74c1785f005fd83d445137437b79ac70a2dfbfe4bb1b09aa5643be545", size = 1286189, upload-time = "2024-11-23T17:24:14.615Z" },
{ url = "https://files.pythonhosted.org/packages/87/6a/9114d65b3f7a9c20a62b9d2ca3b770ee65de849e4131cc7aa58cdc50cb07/fasttext_predict-0.9.2.4-cp313-cp313t-win32.whl", hash = "sha256:a85c7de3d4480faa12b930637fca9c23144d1520786fedf9ba8edd8642ed4aea", size = 95905, upload-time = "2024-11-23T17:24:15.868Z" },
{ url = "https://files.pythonhosted.org/packages/31/fb/6d251f3fdfe3346ee60d091f55106513e509659ee005ad39c914182c96f4/fasttext_predict-0.9.2.4-cp313-cp313t-win_amd64.whl", hash = "sha256:be0933fa4af7abae09c703d28f9e17c80e7069eb6f92100b21985b777f4ea275", size = 110325, upload-time = "2024-11-23T17:24:16.984Z" },
]
[[package]]
name = "generateds"
version = "2.44.3"
@@ -586,6 +648,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/cc/35/cc0aaecf278bb4575b8555f2b137de5ab821595ddae9da9d3cd1da4072c7/propcache-0.3.2-py3-none-any.whl", hash = "sha256:98f1ec44fb675f5052cccc8e609c46ed23a35a1cfd18545ad4e29002d858a43f", size = 12663, upload-time = "2025-06-09T22:56:04.484Z" },
]
[[package]]
name = "pushover-complete"
version = "2.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/db/ae/2ed5c277e22316d8a31e2f67c6c9fd5021189ed3754e144aad53d874d687/pushover_complete-2.0.0.tar.gz", hash = "sha256:24fc7d84d73426840e7678fee80d36f40df0114cb30352ba4f99ab3842ed21a7", size = 19035, upload-time = "2025-05-20T12:47:59.464Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9e/c2/7debacdeb30d5956e5c5573f129ea2a422eeaaba8993ddfc61c9c0e54c95/pushover_complete-2.0.0-py3-none-any.whl", hash = "sha256:9dbb540daf86b26375e0aaa4b798ad5936b27047ee82cf3213bafeee96929527", size = 9952, upload-time = "2025-05-20T12:47:58.248Z" },
]
[[package]]
name = "pydantic"
version = "2.11.9"
@@ -754,6 +828,20 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" },
]
[[package]]
name = "robust-downloader"
version = "0.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorlog" },
{ name = "requests" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/63/20/8d28efa080f58fa06f6378875ac482ee511c076369e5293a2e65128cf9a0/robust-downloader-0.0.2.tar.gz", hash = "sha256:08c938b96e317abe6b037e34230a91bda9b5d613f009bca4a47664997c61de90", size = 15785, upload-time = "2023-11-13T03:00:20.637Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/56/a1/779e9d0ebbdc704411ce30915a1105eb01aeaa9e402d7e446613ff8fb121/robust_downloader-0.0.2-py3-none-any.whl", hash = "sha256:8fe08bfb64d714fd1a048a7df6eb7b413eb4e624309a49db2c16fbb80a62869d", size = 15534, upload-time = "2023-11-13T03:00:18.957Z" },
]
[[package]]
name = "ruff"
version = "0.13.1"
@@ -852,6 +940,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f6/17/57b444fd314d5e1593350b9a31d000e7411ba8e17ce12dc7ad54ca76b810/toposort-1.10-py3-none-any.whl", hash = "sha256:cbdbc0d0bee4d2695ab2ceec97fe0679e9c10eab4b2a87a9372b929e70563a87", size = 8500, upload-time = "2023-02-25T20:07:06.538Z" },
]
[[package]]
name = "tqdm"
version = "4.67.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a8/4b/29b4ef32e036bb34e4ab51796dd745cdba7ed47ad142a9f4a1eb8e0c744d/tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2", size = 169737, upload-time = "2024-11-24T20:12:22.481Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2", size = 78540, upload-time = "2024-11-24T20:12:19.698Z" },
]
[[package]]
name = "typing-extensions"
version = "4.15.0"