Alembic experiments

MIGRATION_REFACTORING.md (new file, 174 lines)
@@ -0,0 +1,174 @@
# Database Migration Refactoring

## Summary

This refactoring changes the database handling from manual schema migrations in `migrations.py` to using Alembic for proper database migrations. The key improvements are:

1. **Alembic Integration**: All schema migrations now use Alembic's migration framework
2. **Separation of Concerns**: Migrations (schema changes) are separated from startup tasks (data backfills)
3. **Pre-startup Migrations**: Database migrations run BEFORE the application starts, avoiding issues with multiple workers
4. **Production Ready**: The Conversions/ConversionRoom tables can be safely recreated (data is recoverable from PMS XML imports)

## Changes Made

### 1. Alembic Setup

- **[alembic.ini](alembic.ini)**: Configuration file for Alembic
- **[alembic/env.py](alembic/env.py)**: Async-compatible environment setup that:
  - Loads the database URL from config.yaml or environment variables
  - Supports PostgreSQL schemas
  - Uses an async SQLAlchemy engine

### 2. Initial Migrations

Two migrations were created:

#### Migration 1: `535b70e85b64_initial_schema.py`
Creates all base tables:
- `customers`
- `hashed_customers`
- `reservations`
- `acked_requests`
- `conversions`
- `conversion_rooms`

This migration is idempotent - it only creates missing tables.

#### Migration 2: `8edfc81558db_drop_and_recreate_conversions_tables.py`
Handles the conversion from the old production conversions schema to the new normalized schema:
- Detects whether old conversions tables exist with an incompatible schema
- Drops them if needed (the data can be recreated from PMS XML imports)
- Allows the initial schema migration to recreate them with the correct structure

### 3. Refactored Files

#### [src/alpine_bits_python/db_setup.py](src/alpine_bits_python/db_setup.py)
- **Before**: Ran manual migrations AND created tables using `Base.metadata.create_all`
- **After**: Only runs startup tasks (data backfills such as customer hashing)
- **Note**: Schema migrations are now handled by Alembic; a defensive sketch of the resulting split follows
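As a sanity check on this split, a startup task can verify that Alembic has already brought the schema to head before doing any backfills. A minimal sketch, not the repository's actual db_setup.py code (only the `run_startup_tasks` name is known from this commit):

```python
# Defensive-check sketch only; the real db_setup.py may differ.
from alembic.config import Config
from alembic.script import ScriptDirectory
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncEngine


async def run_startup_tasks(engine: AsyncEngine, config: dict) -> None:
    """Run data backfills; assumes (and verifies) the schema is already migrated."""
    head = ScriptDirectory.from_config(Config("alembic.ini")).get_current_head()
    async with engine.connect() as conn:
        row = await conn.execute(text("SELECT version_num FROM alembic_version"))
        current = row.scalar()
    if current != head:
        raise RuntimeError(f"Database at revision {current}, expected head {head}")
    # ...data backfills (e.g. customer hashing) would run here...
```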
#### [src/alpine_bits_python/run_migrations.py](src/alpine_bits_python/run_migrations.py) (NEW)
- Wrapper script to run `alembic upgrade head`
- Can be called standalone or from run_api.py
- Handles errors gracefully; a sketch of the idea follows
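A minimal sketch of what such a wrapper can look like using Alembic's Python API (the actual run_migrations.py may differ in detail):

```python
# Sketch of a pre-startup migration wrapper; error handling is illustrative.
import sys

from alembic import command
from alembic.config import Config


def run_migrations() -> bool:
    """Run `alembic upgrade head`; return True on success."""
    try:
        command.upgrade(Config("alembic.ini"), "head")
        return True
    except Exception as exc:
        print(f"Migration failed: {exc}", file=sys.stderr)
        return False


if __name__ == "__main__":
    sys.exit(0 if run_migrations() else 1)
```

Exiting non-zero keeps process supervisors from starting the API against an unmigrated database.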
#### [src/alpine_bits_python/api.py](src/alpine_bits_python/api.py)
- **Removed**: `run_all_migrations()` call from lifespan
- **Removed**: `Base.metadata.create_all()` call
- **Changed**: Now only calls `run_startup_tasks()` for data backfills
- **Note**: Assumes migrations have already been run before the app starts

#### [src/alpine_bits_python/run_api.py](src/alpine_bits_python/run_api.py)
- **Added**: Calls `run_migrations()` BEFORE starting uvicorn
- **Benefit**: Migrations complete before any worker starts
- **Benefit**: Works correctly with multiple workers; see the sketch below
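A sketch of that ordering, reusing the hypothetical `run_migrations()` wrapper above (the real run_api.py may pass different uvicorn options):

```python
# Sketch: migrate once in the parent process, then let uvicorn fork workers.
import uvicorn

from alpine_bits_python.run_migrations import run_migrations


def main() -> None:
    if not run_migrations():
        raise SystemExit("Refusing to start: database migrations failed")
    # Workers started after this point all see the fully migrated schema.
    uvicorn.run("alpine_bits_python.api:app", host="0.0.0.0", port=8080, workers=4)


if __name__ == "__main__":
    main()
```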
### 4. Old Files (can be removed in a future cleanup)

- **[src/alpine_bits_python/migrations.py](src/alpine_bits_python/migrations.py)**: Old manual migration functions
  - These can be safely removed once you verify that the Alembic setup works
  - The functionality has been replaced by Alembic migrations

## Usage

### Development

Start the server (migrations run automatically):

```bash
uv run python -m alpine_bits_python.run_api
```

Or run migrations separately:

```bash
uv run alembic upgrade head
uv run python -m alpine_bits_python.run_api
```

### Production with Multiple Workers

The migrations automatically run before uvicorn starts, so you can safely use:

```bash
# Migrations run once, then the server starts with multiple workers
uv run python -m alpine_bits_python.run_api

# Or with uvicorn directly (migrations won't run automatically):
uv run alembic upgrade head  # Run this first
uvicorn alpine_bits_python.api:app --workers 4 --host 0.0.0.0 --port 8080
```

### Creating New Migrations

When you modify the database schema in `db.py`:

```bash
# Generate a migration automatically
uv run alembic revision --autogenerate -m "description_of_change"

# Or create an empty migration to fill in manually
uv run alembic revision -m "description_of_change"

# Review the generated migration in alembic/versions/
# Then apply it
uv run alembic upgrade head
```

### Checking Migration Status

```bash
# Show current revision
uv run alembic current

# Show migration history
uv run alembic history

# Show head revisions; compare with `alembic current` to spot pending migrations
uv run alembic heads
```

## Benefits

1. **Multiple-Worker Safe**: Migrations run once before any worker starts
2. **Proper Migration History**: All schema changes are tracked in version control
3. **Rollback Support**: Can downgrade to previous schema versions if needed
4. **Standard Tool**: Alembic is the industry-standard migration tool for SQLAlchemy
5. **Separation of Concerns**:
   - Schema migrations (Alembic) are separate from startup tasks (db_setup.py)
   - Migrations are separate from application code

## Migration from Old System

If you have an existing database with the old migration system:

1. The initial migration will detect existing tables and skip creating them
2. The conversions table migration will detect old schemas and recreate those tables
3. All data in other tables is preserved
4. Conversions data will be lost but can be recreated from PMS XML imports

## Important Notes

### Conversions Table Data Loss

The `conversions` and `conversion_rooms` tables will be dropped and recreated with the new schema. This is intentional because:
- The production version has a different schema
- The data can be recreated by re-importing PMS XML files
- This avoids complex data migration logic

If you need to preserve this data, modify the migration before running it, for example as sketched below.
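One possibility is to rename the old tables instead of dropping them. A sketch, placed inside the migration's `upgrade()` (prefix the table names with your schema if you use one):

```python
# Sketch: keep the old rows queryable instead of destroying them.
# These statements would replace the DROP TABLE calls in upgrade().
op.execute("ALTER TABLE conversion_rooms RENAME TO conversion_rooms_legacy")
op.execute("ALTER TABLE conversions RENAME TO conversions_legacy")
```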
### Future Migrations

In the future, when you need to change the database schema:

1. Modify the model classes in `db.py`
2. Generate an Alembic migration: `uv run alembic revision --autogenerate -m "description"`
3. Review the generated migration carefully
4. Test it on a dev database first
5. Apply it to production: `uv run alembic upgrade head`

## Configuration

The Alembic setup reads configuration from the same sources as the application:
- `config.yaml` (via `annotatedyaml` with `secrets.yaml`)
- Environment variables (`DATABASE_URL`, `DATABASE_SCHEMA`)

No additional configuration needed!

alembic.ini (new file, 148 lines)
@@ -0,0 +1,148 @@
# A generic, single database configuration.

[alembic]
# path to migration scripts.
# this is typically a path given in POSIX (e.g. forward slashes)
# format, relative to the token %(here)s which refers to the location of this
# ini file
script_location = %(here)s/alembic

# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory. for multiple paths, the path separator
# is defined by "path_separator" below.
prepend_sys_path = .


# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the tzdata library which can be installed by adding
# `alembic[tz]` to the pip requirements.
# string value is passed to ZoneInfo()
# leave blank for localtime
# timezone =

# max length of characters to apply to the "slug" field
# truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; This defaults
# to <script_location>/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "path_separator"
# below.
# version_locations = %(here)s/bar:%(here)s/bat:%(here)s/alembic/versions

# path_separator; This indicates what character is used to split lists of file
# paths, including version_locations and prepend_sys_path within configparser
# files such as alembic.ini.
# The default rendered in new alembic.ini files is "os", which uses os.pathsep
# to provide os-dependent path splitting.
#
# Note that in order to support legacy alembic.ini files, this default does NOT
# take place if path_separator is not present in alembic.ini. If this
# option is omitted entirely, fallback logic is as follows:
#
# 1. Parsing of the version_locations option falls back to using the legacy
#    "version_path_separator" key, which if absent then falls back to the legacy
#    behavior of splitting on spaces and/or commas.
# 2. Parsing of the prepend_sys_path option falls back to the legacy
#    behavior of splitting on spaces, commas, or colons.
#
# Valid values for path_separator are:
#
# path_separator = :
# path_separator = ;
# path_separator = space
# path_separator = newline
#
# Use os.pathsep. Default configuration used for new projects.
path_separator = os

# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

# database URL. This is consumed by the user-maintained env.py script only.
# other means of configuring database URLs may be customized within the env.py
# file. In this project, we get the URL from config.yaml or environment variables
# so this is just a placeholder.
# sqlalchemy.url = driver://user:pass@localhost/dbname


[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME

# lint with attempts to fix using "ruff" - use the module runner, against the "ruff" module
# hooks = ruff
# ruff.type = module
# ruff.module = ruff
# ruff.options = check --fix REVISION_SCRIPT_FILENAME

# Alternatively, use the exec runner to execute a binary found on your PATH
# hooks = ruff
# ruff.type = exec
# ruff.executable = ruff
# ruff.options = check --fix REVISION_SCRIPT_FILENAME

# Logging configuration. This is also consumed by the user-maintained
# env.py script only.
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARNING
handlers = console
qualname =

[logger_sqlalchemy]
level = WARNING
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
alembic/README (new file, 1 line)
@@ -0,0 +1 @@
Generic single-database configuration.

alembic/README.md (new file, 123 lines)
@@ -0,0 +1,123 @@
# Database Migrations

This directory contains Alembic database migrations for the Alpine Bits Python Server.

## Quick Reference

### Common Commands

```bash
# Check current migration status
uv run alembic current

# Show migration history
uv run alembic history --verbose

# Upgrade to the latest migration
uv run alembic upgrade head

# Downgrade one version
uv run alembic downgrade -1

# Create a new migration (auto-generated from model changes)
uv run alembic revision --autogenerate -m "description"

# Create a new empty migration (manual)
uv run alembic revision -m "description"
```

## Migration Files

### Current Migrations

1. **535b70e85b64_initial_schema.py** - Creates all base tables
2. **8edfc81558db_drop_and_recreate_conversions_tables.py** - Handles the conversions table schema change

## How Migrations Work

1. Alembic tracks which migrations have been applied using the `alembic_version` table
2. When you run `alembic upgrade head`, it applies all pending migrations in order
3. Each migration has an `upgrade()` and a `downgrade()` function; a minimal example follows this list
4. Migrations are applied transactionally (all or nothing)
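For reference, a minimal illustrative migration (not one of this project's revisions) with that `upgrade()`/`downgrade()` pair:

```python
"""Illustrative example: add a nickname column to customers."""
import sqlalchemy as sa
from alembic import op

# Made-up identifiers for illustration only.
revision = "0000aaaa1111"
down_revision = None
branch_labels = None
depends_on = None


def upgrade() -> None:
    op.add_column("customers", sa.Column("nickname", sa.String(), nullable=True))


def downgrade() -> None:
    op.drop_column("customers", "nickname")
```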
## Configuration

The Alembic environment ([env.py](env.py)) is configured to:
- Read the database URL from `config.yaml` or environment variables
- Support PostgreSQL schemas
- Use async SQLAlchemy (compatible with FastAPI)
- Apply migrations in the correct schema

## Best Practices

1. **Always review auto-generated migrations** - Alembic's autogenerate is smart but not perfect
2. **Test migrations on dev first** - Never run untested migrations on production
3. **Keep migrations small** - One logical change per migration
4. **Never edit applied migrations** - Create a new migration to fix issues
5. **Commit migrations to git** - Migrations are part of your code

## Creating a New Migration

When you modify models in `src/alpine_bits_python/db.py`:

```bash
# 1. Generate the migration
uv run alembic revision --autogenerate -m "add_user_preferences_table"

# 2. Review the generated file in alembic/versions/
#    Look for:
#    - Incorrect type changes
#    - Missing indexes
#    - Data that needs to be migrated

# 3. Test it
uv run alembic upgrade head

# 4. If there are issues, downgrade and fix:
uv run alembic downgrade -1
# Edit the migration file
uv run alembic upgrade head

# 5. Commit the migration file to git
git add alembic/versions/2025_*.py
git commit -m "Add user preferences table migration"
```

## Troubleshooting

### "FAILED: Target database is not up to date"

This means pending migrations need to be applied:

```bash
uv run alembic upgrade head
```

### "Can't locate revision identified by 'xxxxx'"

The alembic_version table may be out of sync. Check what's in the database:

```sql
-- Connect to your database and run:
SELECT * FROM alembic_version;
```

### Migration conflicts after a git merge

If two branches created migrations at the same time:

```bash
# Create a merge migration
uv run alembic merge heads -m "merge branches"
```

### Need to reset migrations (DANGEROUS - ONLY FOR DEV)

```bash
# WARNING: This will delete all data!
uv run alembic downgrade base  # Removes all tables
uv run alembic upgrade head    # Recreates everything
```

## More Information

- [Alembic Documentation](https://alembic.sqlalchemy.org/)
- [Alembic Tutorial](https://alembic.sqlalchemy.org/en/latest/tutorial.html)
- See [../MIGRATION_REFACTORING.md](../MIGRATION_REFACTORING.md) for details on how this project uses Alembic

alembic/env.py (new file, 125 lines)
@@ -0,0 +1,125 @@
"""Alembic environment configuration for async SQLAlchemy."""

import asyncio
from logging.config import fileConfig

from alembic import context
from sqlalchemy import pool
from sqlalchemy.engine import Connection
from sqlalchemy.ext.asyncio import async_engine_from_config

# Import your models' Base to enable autogenerate
from alpine_bits_python.config_loader import load_config
from alpine_bits_python.db import (
    Base,
    configure_schema,
    get_database_schema,
    get_database_url,
)

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# Load the application config to get the database URL and schema
try:
    app_config = load_config()
except (FileNotFoundError, KeyError, ValueError):
    # Fallback if the config can't be loaded (e.g., during initial setup)
    app_config = {}

# Get the database URL from the application config
db_url = get_database_url(app_config)
if db_url:
    config.set_main_option("sqlalchemy.url", db_url)

# Get the schema name from the application config
schema_name = get_database_schema(app_config)
if schema_name:
    # Configure the schema for all tables before migrations
    configure_schema(schema_name)

# add your model's MetaData object here for 'autogenerate' support
target_metadata = Base.metadata


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
        version_table_schema=schema_name,  # Store alembic_version in our schema
        include_schemas=True,
    )

    with context.begin_transaction():
        context.run_migrations()


def do_run_migrations(connection: Connection) -> None:
    """Run migrations with the given connection."""
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        version_table_schema=schema_name,  # Store alembic_version in our schema
        include_schemas=True,  # Allow Alembic to work with non-default schemas
    )

    with context.begin_transaction():
        context.run_migrations()


async def run_async_migrations() -> None:
    """Run migrations in 'online' mode using an async engine.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    # Get the config section for sqlalchemy settings
    configuration = config.get_section(config.config_ini_section, {})

    # Add connect_args for PostgreSQL schema support if needed
    if schema_name and "postgresql" in configuration.get("sqlalchemy.url", ""):
        configuration["connect_args"] = {
            "server_settings": {"search_path": f"{schema_name},public"}
        }

    # Create the async engine
    connectable = async_engine_from_config(
        configuration,
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    async with connectable.connect() as connection:
        await connection.run_sync(do_run_migrations)

    await connectable.dispose()


def run_migrations_online() -> None:
    """Run migrations in 'online' mode - entry point."""
    asyncio.run(run_async_migrations())


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()

alembic/script.py.mako (new file, 28 lines)
@@ -0,0 +1,28 @@
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, Sequence[str], None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}


def upgrade() -> None:
    """Upgrade schema."""
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    """Downgrade schema."""
    ${downgrades if downgrades else "pass"}

@@ -0,0 +1,320 @@
"""Baseline existing database.

This migration handles the transition from the old manual migration system
to Alembic. It:
1. Detects if the old conversions table schema exists and recreates it with the new schema
2. Acts as a no-op for all other tables (assumes they already exist)

This allows existing databases to migrate to Alembic without data loss.

Revision ID: 94134e512a12
Revises:
Create Date: 2025-11-18 10:46:12.322570
"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op
from sqlalchemy import inspect

# revision identifiers, used by Alembic.
revision: str = "94134e512a12"
down_revision: str | None = None
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Migrate an existing database to Alembic management.

    This migration:
    - Drops and recreates the conversions/conversion_rooms tables with the new schema
    - Assumes all other tables already exist (no-op for them)
    """
    conn = op.get_bind()
    inspector = inspect(conn)

    # Get the schema from the application config (the same source env.py uses)
    from alpine_bits_python.config_loader import load_config
    from alpine_bits_python.db import get_database_schema

    try:
        app_config = load_config()
        schema = get_database_schema(app_config)
    except Exception:
        schema = None

    print(f"Using schema: {schema or 'public (default)'}")

    # Get tables from the correct schema
    existing_tables = set(inspector.get_table_names(schema=schema))
    print(f"Found existing tables in schema '{schema}': {existing_tables}")

    # Handle the conversions table migration
    if "conversions" in existing_tables:
        columns_set = {
            col["name"] for col in inspector.get_columns("conversions", schema=schema)
        }
        print(f"Found columns in conversions table: {sorted(columns_set)}")

        # Old schema indicators: these columns should NOT be in conversions anymore
        old_schema_columns = {
            "arrival_date",
            "departure_date",
            "room_status",
            "room_number",
            "sale_date",
            "revenue_total",
            "revenue_logis",
            "revenue_board",
        }

        intersection = old_schema_columns & columns_set

        # If ANY of the old denormalized columns exist, this is the old schema
        if intersection:
            # Old schema detected, drop and recreate
            print(
                f"Detected old conversions schema with denormalized room data: {intersection}"
            )

            # Drop conversion_rooms FIRST if it exists (due to the foreign key constraint)
            if "conversion_rooms" in existing_tables:
                print("Dropping old conversion_rooms table...")
                op.execute(
                    f"DROP TABLE IF EXISTS {schema}.conversion_rooms CASCADE"
                    if schema
                    else "DROP TABLE IF EXISTS conversion_rooms CASCADE"
                )

            print("Dropping old conversions table...")
            op.execute(
                f"DROP TABLE IF EXISTS {schema}.conversions CASCADE"
                if schema
                else "DROP TABLE IF EXISTS conversions CASCADE"
            )

            # Drop any orphaned indexes that may have survived the table drop
            print("Dropping any orphaned indexes...")
            index_names = [
                "ix_conversions_advertising_campagne",
                "ix_conversions_advertising_medium",
                "ix_conversions_advertising_partner",
                "ix_conversions_customer_id",
                "ix_conversions_guest_email",
                "ix_conversions_guest_first_name",
                "ix_conversions_guest_last_name",
                "ix_conversions_hashed_customer_id",
                "ix_conversions_hotel_id",
                "ix_conversions_pms_reservation_id",
                "ix_conversions_reservation_id",
                "ix_conversion_rooms_arrival_date",
                "ix_conversion_rooms_conversion_id",
                "ix_conversion_rooms_departure_date",
                "ix_conversion_rooms_pms_hotel_reservation_id",
                "ix_conversion_rooms_room_number",
            ]
            for idx_name in index_names:
                op.execute(
                    f"DROP INDEX IF EXISTS {schema}.{idx_name}"
                    if schema
                    else f"DROP INDEX IF EXISTS {idx_name}"
                )

            print("Creating new conversions table with normalized schema...")
            create_conversions_table(schema)
            create_conversion_rooms_table(schema)
        else:
            print("Conversions table already has the new schema, skipping migration")
    else:
        # No conversions table exists, create it
        print("No conversions table found, creating new schema...")
        create_conversions_table(schema)
        create_conversion_rooms_table(schema)

    print("Baseline migration complete!")


def create_conversions_table(schema=None):
    """Create the conversions table with the new normalized schema."""
    op.create_table(
        "conversions",
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("reservation_id", sa.Integer(), nullable=True),
        sa.Column("customer_id", sa.Integer(), nullable=True),
        sa.Column("hashed_customer_id", sa.Integer(), nullable=True),
        sa.Column("hotel_id", sa.String(), nullable=True),
        sa.Column("pms_reservation_id", sa.String(), nullable=True),
        sa.Column("reservation_number", sa.String(), nullable=True),
        sa.Column("reservation_date", sa.Date(), nullable=True),
        sa.Column("creation_time", sa.DateTime(timezone=True), nullable=True),
        sa.Column("reservation_type", sa.String(), nullable=True),
        sa.Column("booking_channel", sa.String(), nullable=True),
        sa.Column("guest_first_name", sa.String(), nullable=True),
        sa.Column("guest_last_name", sa.String(), nullable=True),
        sa.Column("guest_email", sa.String(), nullable=True),
        sa.Column("guest_country_code", sa.String(), nullable=True),
        sa.Column("advertising_medium", sa.String(), nullable=True),
        sa.Column("advertising_partner", sa.String(), nullable=True),
        sa.Column("advertising_campagne", sa.String(), nullable=True),
        sa.Column("created_at", sa.DateTime(timezone=True), nullable=True),
        sa.Column("updated_at", sa.DateTime(timezone=True), nullable=True),
        sa.ForeignKeyConstraint(["customer_id"], ["customers.id"]),
        sa.ForeignKeyConstraint(["hashed_customer_id"], ["hashed_customers.id"]),
        sa.ForeignKeyConstraint(["reservation_id"], ["reservations.id"]),
        sa.PrimaryKeyConstraint("id"),
        schema=schema,
    )

    # Create indexes (all in the same schema as the table)
    op.create_index(
        op.f("ix_conversions_advertising_campagne"),
        "conversions",
        ["advertising_campagne"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_advertising_medium"),
        "conversions",
        ["advertising_medium"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_advertising_partner"),
        "conversions",
        ["advertising_partner"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_customer_id"),
        "conversions",
        ["customer_id"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_guest_email"),
        "conversions",
        ["guest_email"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_guest_first_name"),
        "conversions",
        ["guest_first_name"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_guest_last_name"),
        "conversions",
        ["guest_last_name"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_hashed_customer_id"),
        "conversions",
        ["hashed_customer_id"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_hotel_id"),
        "conversions",
        ["hotel_id"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_pms_reservation_id"),
        "conversions",
        ["pms_reservation_id"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversions_reservation_id"),
        "conversions",
        ["reservation_id"],
        unique=False,
        schema=schema,
    )


def create_conversion_rooms_table(schema=None):
    """Create the conversion_rooms table with the new normalized schema."""
    op.create_table(
        "conversion_rooms",
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("conversion_id", sa.Integer(), nullable=False),
        sa.Column("pms_hotel_reservation_id", sa.String(), nullable=True),
        sa.Column("arrival_date", sa.Date(), nullable=True),
        sa.Column("departure_date", sa.Date(), nullable=True),
        sa.Column("room_status", sa.String(), nullable=True),
        sa.Column("room_type", sa.String(), nullable=True),
        sa.Column("room_number", sa.String(), nullable=True),
        sa.Column("num_adults", sa.Integer(), nullable=True),
        sa.Column("rate_plan_code", sa.String(), nullable=True),
        sa.Column("connected_room_type", sa.String(), nullable=True),
        sa.Column("daily_sales", sa.JSON(), nullable=True),
        sa.Column("total_revenue", sa.String(), nullable=True),
        sa.Column("created_at", sa.DateTime(timezone=True), nullable=True),
        sa.Column("updated_at", sa.DateTime(timezone=True), nullable=True),
        sa.ForeignKeyConstraint(["conversion_id"], ["conversions.id"], ondelete="CASCADE"),
        sa.PrimaryKeyConstraint("id"),
        schema=schema,
    )

    # Create indexes
    op.create_index(
        op.f("ix_conversion_rooms_arrival_date"),
        "conversion_rooms",
        ["arrival_date"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversion_rooms_conversion_id"),
        "conversion_rooms",
        ["conversion_id"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversion_rooms_departure_date"),
        "conversion_rooms",
        ["departure_date"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversion_rooms_pms_hotel_reservation_id"),
        "conversion_rooms",
        ["pms_hotel_reservation_id"],
        unique=False,
        schema=schema,
    )
    op.create_index(
        op.f("ix_conversion_rooms_room_number"),
        "conversion_rooms",
        ["room_number"],
        unique=False,
        schema=schema,
    )


def downgrade() -> None:
    """Downgrade is not supported.

    This baseline migration drops data (the old conversions schema) that can be
    recreated from PMS XML imports. Reverting would require re-importing.
    """
@@ -1453713,3 +1453713,309 @@ WHERE alpinebits.conversions.pms_reservation_id = $1::VARCHAR]
2025-11-17 21:14:04 - alpine_bits_python.alpine_bits_helpers - WARNING - invalid email address: silviadallapiazza@gmail.con -> The domain name gmail.con does not exist.
2025-11-17 21:14:05 - alpine_bits_python.alpine_bits_helpers - WARNING - invalid email address: valentina_giannini92@hormail.it -> The domain name hormail.it does not exist.
2025-11-17 21:14:05 - alpine_bits_python.alpine_bits_helpers - WARNING - invalid email address: sara.trovarelli@gmail.con -> The domain name gmail.con does not exist.
2025-11-18 09:52:37 - alpine_bits_python.migrations - ERROR - Migration failed: (sqlalchemy.dialects.postgresql.asyncpg.Error) <class 'asyncpg.exceptions.DependentObjectsStillExistError'>: cannot drop table conversions because other objects depend on it
DETAIL: constraint conversion_rooms_conversion_id_fkey on table conversion_rooms depends on table conversions
HINT: Use DROP ... CASCADE to drop the dependent objects too.
[SQL: DROP TABLE IF EXISTS conversions]
(Background on this error at: https://sqlalche.me/e/20/dbapi)
Traceback (most recent call last):
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 545, in _prepare_and_execute
    self._rows = deque(await prepared_stmt.fetch(*parameters))
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/asyncpg/prepared_stmt.py", line 176, in fetch
    data = await self.__bind_execute(args, 0, timeout)
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/asyncpg/prepared_stmt.py", line 267, in __bind_execute
    data, status, _ = await self.__do_execute(
        lambda protocol: protocol.bind_execute(
            self._state, args, '', limit, True, timeout))
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/asyncpg/prepared_stmt.py", line 256, in __do_execute
    return await executor(protocol)
  File "asyncpg/protocol/protocol.pyx", line 206, in bind_execute
asyncpg.exceptions.DependentObjectsStillExistError: cannot drop table conversions because other objects depend on it
DETAIL: constraint conversion_rooms_conversion_id_fkey on table conversion_rooms depends on table conversions
HINT: Use DROP ... CASCADE to drop the dependent objects too.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
        cursor, str_statement, effective_parameters, context
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
    cursor.execute(statement, parameters)
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 580, in execute
    self._adapt_connection.await_(
        self._prepare_and_execute(operation, parameters)
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 132, in await_only
    return current.parent.switch(awaitable)  # type: ignore[no-any-return,attr-defined]  # noqa: E501
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 196, in greenlet_spawn
    value = await result
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 558, in _prepare_and_execute
    self._handle_exception(error)
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 508, in _handle_exception
    self._adapt_connection._handle_exception(error)
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 792, in _handle_exception
    raise translated_error from error
sqlalchemy.dialects.postgresql.asyncpg.AsyncAdapt_asyncpg_dbapi.Error: <class 'asyncpg.exceptions.DependentObjectsStillExistError'>: cannot drop table conversions because other objects depend on it
DETAIL: constraint conversion_rooms_conversion_id_fkey on table conversion_rooms depends on table conversions
HINT: Use DROP ... CASCADE to drop the dependent objects too.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/divusjulius/repos/alpine_bits_python_server/src/alpine_bits_python/migrations.py", line 504, in run_all_migrations
    await migrate_normalize_conversions(engine)
  File "/home/divusjulius/repos/alpine_bits_python_server/src/alpine_bits_python/migrations.py", line 462, in migrate_normalize_conversions
    await drop_table(engine, "conversions")
  File "/home/divusjulius/repos/alpine_bits_python_server/src/alpine_bits_python/migrations.py", line 383, in drop_table
    await conn.execute(text(f"DROP TABLE IF EXISTS {table_name}"))
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/ext/asyncio/engine.py", line 658, in execute
    result = await greenlet_spawn(
    ...<5 lines>...
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 201, in greenlet_spawn
    result = context.throw(*sys.exc_info())
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 1419, in execute
    return meth(
        self,
        distilled_parameters,
        execution_options or NO_OPTIONS,
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/sql/elements.py", line 526, in _execute_on_connection
    return connection._execute_clauseelement(
        self, distilled_params, execution_options
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 1641, in _execute_clauseelement
    ret = self._execute_context(
        dialect,
        ...<8 lines>...
        cache_hit=cache_hit,
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
    return self._exec_single_context(
        dialect, context, statement, parameters
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
    self._handle_dbapi_exception(
        e, str_statement, effective_parameters, cursor, context
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 2355, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
        cursor, str_statement, effective_parameters, context
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
    cursor.execute(statement, parameters)
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 580, in execute
    self._adapt_connection.await_(
        self._prepare_and_execute(operation, parameters)
    )
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 132, in await_only
    return current.parent.switch(awaitable)  # type: ignore[no-any-return,attr-defined]  # noqa: E501
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 196, in greenlet_spawn
    value = await result
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 558, in _prepare_and_execute
    self._handle_exception(error)
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 508, in _handle_exception
    self._adapt_connection._handle_exception(error)
  File "/home/divusjulius/repos/alpine_bits_python_server/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 792, in _handle_exception
    raise translated_error from error
sqlalchemy.exc.DBAPIError: (sqlalchemy.dialects.postgresql.asyncpg.Error) <class 'asyncpg.exceptions.DependentObjectsStillExistError'>: cannot drop table conversions because other objects depend on it
DETAIL: constraint conversion_rooms_conversion_id_fkey on table conversion_rooms depends on table conversions
HINT: Use DROP ... CASCADE to drop the dependent objects too.
[SQL: DROP TABLE IF EXISTS conversions]
(Background on this error at: https://sqlalche.me/e/20/dbapi)
2025-11-18 09:57:25 - alpine_bits_python.migrations - ERROR - Migration failed: (sqlalchemy.dialects.postgresql.asyncpg.Error) <class 'asyncpg.exceptions.DependentObjectsStillExistError'>: cannot drop table conversions because other objects depend on it
DETAIL: constraint conversion_rooms_conversion_id_fkey on table conversion_rooms depends on table conversions
HINT: Use DROP ... CASCADE to drop the dependent objects too.
[SQL: DROP TABLE IF EXISTS conversions]
(Background on this error at: https://sqlalche.me/e/20/dbapi)
pyproject.toml
@@ -10,6 +10,7 @@ readme = "README.md"
 requires-python = ">=3.13"
 dependencies = [
     "aiosqlite>=0.21.0",
+    "alembic>=1.17.2",
     "annotatedyaml>=1.0.0",
     "asyncpg>=0.30.0",
     "dotenv>=0.9.9",
47
reset_database.sh
Normal file
@@ -0,0 +1,47 @@
#!/bin/bash
# Reset database and initialize Alembic from scratch

echo "=== Database Reset Script ==="
echo "This will drop all tables and reinitialize with Alembic"
echo ""
read -p "Are you sure? (type 'yes' to continue): " confirm

if [ "$confirm" != "yes" ]; then
    echo "Aborted."
    exit 1
fi

echo ""
echo "Step 1: Dropping all tables in the database..."
echo "Connect to your database and run:"
echo ""
echo "  -- For PostgreSQL:"
echo "  DROP SCHEMA public CASCADE;"
echo "  CREATE SCHEMA public;"
echo "  GRANT ALL ON SCHEMA public TO <your_user>;"
echo "  GRANT ALL ON SCHEMA public TO public;"
echo ""
echo "  -- Or if using a custom schema (e.g., alpinebits):"
echo "  DROP SCHEMA alpinebits CASCADE;"
echo "  CREATE SCHEMA alpinebits;"
echo ""
echo "Press Enter after you've run the SQL commands..."
read

echo ""
echo "Step 2: Running Alembic migrations..."
uv run alembic upgrade head

if [ $? -eq 0 ]; then
    echo ""
    echo "=== Success! ==="
    echo "Database has been reset and migrations applied."
    echo ""
    echo "Current migration status:"
    uv run alembic current
else
    echo ""
    echo "=== Error ==="
    echo "Migration failed. Check the error messages above."
    exit 1
fi
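For CI or other unattended resets, Step 1 of reset_database.sh can be done programmatically instead of via the manual SQL prompt. A sketch in Python using asyncpg (already a project dependency); DATABASE_URL is an assumed environment variable, and the schema name mirrors the prompts above:

    import asyncio
    import os

    import asyncpg


    async def reset_schema(schema: str = "public") -> None:
        """Drop and recreate a schema so `alembic upgrade head` starts clean."""
        conn = await asyncpg.connect(os.environ["DATABASE_URL"])  # assumed env var
        try:
            await conn.execute(f"DROP SCHEMA IF EXISTS {schema} CASCADE")
            await conn.execute(f"CREATE SCHEMA {schema}")
        finally:
            await conn.close()


    if __name__ == "__main__":
        asyncio.run(reset_schema())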
src/alpine_bits_python/api.py
@@ -44,13 +44,13 @@ from .const import CONF_GOOGLE_ACCOUNT, CONF_HOTEL_ID, CONF_META_ACCOUNT, HttpSt
 from .conversion_service import ConversionService
 from .csv_import import CSVImporter
 from .customer_service import CustomerService
-from .db import Base, ResilientAsyncSession, SessionMaker, create_database_engine
+from .db import ResilientAsyncSession, SessionMaker, create_database_engine
 from .db import Customer as DBCustomer
 from .db import Reservation as DBReservation
+from .db_setup import run_startup_tasks
 from .email_monitoring import ReservationStatsCollector
 from .email_service import create_email_service
 from .logging_config import get_logger, setup_logging
-from .migrations import run_all_migrations
 from .pushover_service import create_pushover_service
 from .rate_limit import (
     BURST_RATE_LIMIT,
@@ -331,31 +331,15 @@ async def lifespan(app: FastAPI):
         elif hotel_id and not push_endpoint:
             _LOGGER.info("Hotel %s has no push_endpoint configured", hotel_id)

-    # Create tables first (all workers)
-    # This ensures tables exist before migrations try to alter them
-    async with engine.begin() as conn:
-        await conn.run_sync(Base.metadata.create_all)
-    _LOGGER.info("Database tables checked/created at startup.")
-
-    # Run migrations after tables exist (only primary worker for race conditions)
+    # Run startup tasks (only in primary worker to avoid race conditions)
+    # NOTE: Database migrations should already have been run before the app started
+    # via run_migrations.py or `uv run alembic upgrade head`
     if is_primary:
-        await run_all_migrations(engine, config)
+        _LOGGER.info("Running startup tasks (primary worker)...")
+        await run_startup_tasks(AsyncSessionLocal, config)
+        _LOGGER.info("Startup tasks completed")
     else:
-        _LOGGER.info("Skipping migrations (non-primary worker)")
-
-    # Hash any existing customers (only in primary worker to avoid race conditions)
-    if is_primary:
-        async with AsyncSessionLocal() as session:
-            customer_service = CustomerService(session)
-            hashed_count = await customer_service.hash_existing_customers()
-            if hashed_count > 0:
-                _LOGGER.info(
-                    "Backfilled hashed data for %d existing customers", hashed_count
-                )
-            else:
-                _LOGGER.info("All existing customers already have hashed data")
-    else:
-        _LOGGER.info("Skipping customer hashing (non-primary worker)")
+        _LOGGER.info("Skipping startup tasks (non-primary worker)")

     # Initialize and hook up stats collector for daily reports
     # Note: report_scheduler will only exist on the primary worker
src/alpine_bits_python/conversion_service.py
@@ -1,7 +1,6 @@
 """Service for handling conversion data from hotel PMS XML files."""

 import asyncio
-import json
 import xml.etree.ElementTree as ET
 from datetime import datetime
 from decimal import Decimal
@@ -11,7 +10,14 @@ from sqlalchemy import or_, select
 from sqlalchemy.ext.asyncio import AsyncSession
 from sqlalchemy.orm import selectinload

-from .db import Conversion, RoomReservation, Customer, HashedCustomer, Reservation, SessionMaker
+from .db import (
+    Conversion,
+    ConversionRoom,
+    Customer,
+    HashedCustomer,
+    Reservation,
+    SessionMaker,
+)
 from .logging_config import get_logger

 _LOGGER = get_logger(__name__)
@@ -45,17 +51,23 @@ class ConversionService:
         # Cache for reservation and customer data within a single XML processing run
         # Maps hotel_code -> list of (reservation, customer) tuples
         # This significantly speeds up matching when processing large XML files
-        self._reservation_cache: dict[str | None, list[tuple[Reservation, Customer | None]]] = {}
+        self._reservation_cache: dict[
+            str | None, list[tuple[Reservation, Customer | None]]
+        ] = {}
         self._cache_initialized = False

         if isinstance(session, SessionMaker):
             self.session_maker = session
             self.supports_concurrent = True
-            _LOGGER.info("ConversionService initialized in concurrent mode with SessionMaker")
+            _LOGGER.info(
+                "ConversionService initialized in concurrent mode with SessionMaker"
+            )
         elif isinstance(session, AsyncSession):
             self.session = session
             self.supports_concurrent = False
-            _LOGGER.info("ConversionService initialized in sequential mode with single session")
+            _LOGGER.info(
+                "ConversionService initialized in sequential mode with single session"
+            )
         elif session is not None:
             raise TypeError(
                 f"session must be AsyncSession or SessionMaker, got {type(session)}"
@@ -202,9 +214,7 @@ class ConversionService:
         async with asyncio.TaskGroup() as tg:
             for reservation in reservations:
                 tg.create_task(
-                    self._process_reservation_safe(
-                        reservation, semaphore, stats
-                    )
+                    self._process_reservation_safe(reservation, semaphore, stats)
                 )

     async def _process_reservations_concurrent(
@@ -227,9 +237,7 @@ class ConversionService:
         async with asyncio.TaskGroup() as tg:
             for reservation in reservations:
                 tg.create_task(
-                    self._process_reservation_safe(
-                        reservation, semaphore, stats
-                    )
+                    self._process_reservation_safe(reservation, semaphore, stats)
                 )

     async def _process_reservation_safe(
@@ -247,6 +255,7 @@ class ConversionService:
             reservation_elem: XML element for the reservation
             semaphore: Semaphore to limit concurrent operations
             stats: Shared stats dictionary (thread-safe due to GIL)
+
         """
         pms_reservation_id = reservation_elem.get("id")

@@ -295,18 +304,19 @@ class ConversionService:
             if self.session_maker:
                 await session.close()

-    async def _handle_deleted_reservation(self, pms_reservation_id: str, session: AsyncSession):
+    async def _handle_deleted_reservation(
+        self, pms_reservation_id: str, session: AsyncSession
+    ):
         """Handle deleted reservation by marking conversions as deleted or removing them.

         Args:
             pms_reservation_id: PMS reservation ID to delete
             session: AsyncSession to use for the operation
+
         """
         # For now, we'll just log it. You could add a 'deleted' flag to the Conversion table
         # or actually delete the conversion records
-        _LOGGER.info(
-            "Processing deleted reservation: PMS ID %s", pms_reservation_id
-        )
+        _LOGGER.info("Processing deleted reservation: PMS ID %s", pms_reservation_id)

         # Option 1: Delete conversion records
         result = await session.execute(
@@ -337,6 +347,7 @@ class ConversionService:
         In concurrent mode, each task passes its own session.

         Returns statistics about what was matched.
+
         """
         if session is None:
             session = self.session
@@ -394,9 +405,7 @@ class ConversionService:
                     creation_time_str.replace("Z", "+00:00")
                 )
             except ValueError:
-                _LOGGER.warning(
-                    "Invalid creation time format: %s", creation_time_str
-                )
+                _LOGGER.warning("Invalid creation time format: %s", creation_time_str)

         # Find matching reservation, customer, and hashed_customer using advertising data and guest details
         matched_reservation = None
@@ -515,18 +524,15 @@ class ConversionService:

         # Batch-load existing room reservations to avoid N+1 queries
         room_numbers = [
-            rm.get("roomNumber")
-            for rm in room_reservations.findall("roomReservation")
+            rm.get("roomNumber") for rm in room_reservations.findall("roomReservation")
         ]
         pms_hotel_reservation_ids = [
             f"{pms_reservation_id}_{room_num}" for room_num in room_numbers
         ]

         existing_rooms_result = await session.execute(
-            select(RoomReservation).where(
-                RoomReservation.pms_hotel_reservation_id.in_(
-                    pms_hotel_reservation_ids
-                )
+            select(ConversionRoom).where(
+                ConversionRoom.pms_hotel_reservation_id.in_(pms_hotel_reservation_ids)
             )
         )
         existing_rooms = {
@@ -556,9 +562,7 @@ class ConversionService:
             departure_date = None
             if departure_str:
                 try:
-                    departure_date = datetime.strptime(
-                        departure_str, "%Y-%m-%d"
-                    ).date()
+                    departure_date = datetime.strptime(departure_str, "%Y-%m-%d").date()
                 except ValueError:
                     _LOGGER.warning("Invalid departure date format: %s", departure_str)

@@ -576,7 +580,7 @@ class ConversionService:
             # Process daily sales and extract total revenue
             daily_sales_elem = room_reservation.find("dailySales")
             daily_sales_list = []
-            total_revenue = Decimal("0")
+            total_revenue = Decimal(0)

             if daily_sales_elem is not None:
                 for daily_sale in daily_sales_elem.findall("dailySale"):
@@ -642,7 +646,7 @@ class ConversionService:
                 )
             else:
                 # Create new room reservation
-                room_reservation_record = RoomReservation(
+                room_reservation_record = ConversionRoom(
                     conversion_id=conversion.id,
                     pms_hotel_reservation_id=pms_hotel_reservation_id,
                     arrival_date=arrival_date,
@@ -734,7 +738,9 @@ class ConversionService:
         )

         # Strategy 2: If no advertising match, try email/name-based matching
-        if not result["reservation"] and (guest_email or guest_first_name or guest_last_name):
+        if not result["reservation"] and (
+            guest_email or guest_first_name or guest_last_name
+        ):
             matched_reservation = await self._match_by_guest_details(
                 hotel_id, guest_first_name, guest_last_name, guest_email, session
             )
@@ -798,6 +804,7 @@ class ConversionService:

         Returns:
             Matched Reservation or None
+
         """
         if session is None:
             session = self.session
@@ -882,6 +889,7 @@ class ConversionService:

         Returns:
             Matched Reservation or None
+
         """
         if session is None:
             session = self.session
@@ -892,9 +900,7 @@ class ConversionService:

         # Get reservations from cache for this hotel
         if hotel_id and hotel_id in self._reservation_cache:
-            all_reservations = [
-                res for res, _ in self._reservation_cache[hotel_id]
-            ]
+            all_reservations = [res for res, _ in self._reservation_cache[hotel_id]]
         elif not hotel_id:
             # If no hotel_id specified, use all cached reservations
             for reservations_list in self._reservation_cache.values():
@@ -947,6 +953,7 @@ class ConversionService:

         Returns:
             Matched Reservation or None
+
         """
         # Filter by guest details
         candidates = []
@@ -1019,6 +1026,7 @@ class ConversionService:

         Returns:
             Single best-match Reservation, or None if no good match found
+
         """
         candidates = reservations
86
src/alpine_bits_python/db_setup.py
Normal file
@@ -0,0 +1,86 @@
"""Database setup and initialization.

This module handles all database setup tasks that should run once at startup,
before the application starts accepting requests. It includes:
- Schema migrations via Alembic
- One-time data cleanup/backfill tasks (e.g., hashing existing customers)
"""

import asyncio
from typing import Any

from sqlalchemy.ext.asyncio import AsyncEngine, async_sessionmaker

from .customer_service import CustomerService
from .db import create_database_engine
from .logging_config import get_logger

_LOGGER = get_logger(__name__)


async def setup_database(config: dict[str, Any] | None = None) -> tuple[AsyncEngine, async_sessionmaker]:
    """Set up the database and prepare for application use.

    This function should be called once at application startup, after
    migrations have been run but before the app starts accepting requests. It:
    1. Creates the async engine
    2. Creates the sessionmaker
    3. Performs one-time startup tasks (e.g., hashing existing customers)

    NOTE: Database migrations should be run BEFORE calling this function,
    typically using `uv run alembic upgrade head` or via run_migrations.py.

    Args:
        config: Application configuration dictionary

    Returns:
        Tuple of (engine, async_sessionmaker) for use in the application

    Raises:
        Any database-related exceptions that occur during setup
    """
    _LOGGER.info("Starting database setup...")

    # Create database engine
    engine = create_database_engine(config=config, echo=False)

    try:
        # Create sessionmaker for the application to use
        AsyncSessionLocal = async_sessionmaker(engine, expire_on_commit=False)

        # Perform startup tasks (NOT migrations)
        _LOGGER.info("Running startup tasks...")
        await run_startup_tasks(AsyncSessionLocal, config)
        _LOGGER.info("Startup tasks completed successfully")

        _LOGGER.info("Database setup completed successfully")
        return engine, AsyncSessionLocal

    except Exception as e:
        _LOGGER.exception("Database setup failed: %s", e)
        await engine.dispose()
        raise


async def run_startup_tasks(
    sessionmaker: async_sessionmaker, config: dict[str, Any] | None = None
) -> None:
    """Run one-time startup tasks.

    These are tasks that need to run at startup but are NOT schema migrations.
    Examples: data backfills, hashing existing records, etc.

    Args:
        sessionmaker: SQLAlchemy async sessionmaker
        config: Application configuration dictionary
    """
    # Hash any existing customers that don't have hashed data
    async with sessionmaker() as session:
        customer_service = CustomerService(session)
        hashed_count = await customer_service.hash_existing_customers()
        if hashed_count > 0:
            _LOGGER.info(
                "Backfilled hashed data for %d existing customers", hashed_count
            )
        else:
            _LOGGER.info("All existing customers already have hashed data")
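Taken together with run_migrations.py, the intended call order is: migrate first, then setup_database() once per process. A minimal sketch of wiring it into a FastAPI lifespan; the app.state attribute names here are illustrative and not taken from api.py:

    from contextlib import asynccontextmanager

    from fastapi import FastAPI

    from alpine_bits_python.db_setup import setup_database


    @asynccontextmanager
    async def lifespan(app: FastAPI):
        # Assumes `uv run alembic upgrade head` (or run_migrations.py) already ran.
        engine, session_maker = await setup_database(config=None)
        app.state.engine = engine  # illustrative attribute names
        app.state.session_maker = session_maker
        try:
            yield
        finally:
            await engine.dispose()


    app = FastAPI(lifespan=lifespan)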
src/alpine_bits_python/migrations.py
@@ -11,12 +11,13 @@ from sqlalchemy.ext.asyncio import AsyncEngine

 from .const import CONF_GOOGLE_ACCOUNT, CONF_HOTEL_ID, CONF_META_ACCOUNT
 from .logging_config import get_logger
+from .db import Reservation

 _LOGGER = get_logger(__name__)


-async def check_column_exists(engine: AsyncEngine, table_name: str, column_name: str) -> bool:
+async def check_column_exists(
+    engine: AsyncEngine, table_name: str, column_name: str
+) -> bool:
     """Check if a column exists in a table.

     Args:
@@ -26,11 +27,13 @@ async def check_column_exists(engine: AsyncEngine, table_name: str, column_name:

     Returns:
         True if column exists, False otherwise
+
     """
     async with engine.connect() as conn:

         def _check(connection):
             inspector = inspect(connection)
-            columns = [col['name'] for col in inspector.get_columns(table_name)]
+            columns = [col["name"] for col in inspector.get_columns(table_name)]
             return column_name in columns

         result = await conn.run_sync(_check)
@@ -38,10 +41,7 @@ async def check_column_exists(engine: AsyncEngine, table_name: str, column_name:


 async def add_column_if_not_exists(
-    engine: AsyncEngine,
-    table_name: str,
-    column_name: str,
-    column_type: str = "VARCHAR"
+    engine: AsyncEngine, table_name: str, column_name: str, column_type: str = "VARCHAR"
 ) -> bool:
     """Add a column to a table if it doesn't already exist.

@@ -53,6 +53,7 @@ async def add_column_if_not_exists(

     Returns:
         True if column was added, False if it already existed
+
     """
     exists = await check_column_exists(engine, table_name, column_name)

@@ -85,10 +86,14 @@ async def migrate_add_room_types(engine: AsyncEngine) -> None:
     added_count = 0

     # Add each column if it doesn't exist
-    if await add_column_if_not_exists(engine, "reservations", "room_type_code", "VARCHAR"):
+    if await add_column_if_not_exists(
+        engine, "reservations", "room_type_code", "VARCHAR"
+    ):
         added_count += 1

-    if await add_column_if_not_exists(engine, "reservations", "room_classification_code", "VARCHAR"):
+    if await add_column_if_not_exists(
+        engine, "reservations", "room_classification_code", "VARCHAR"
+    ):
         added_count += 1

     if await add_column_if_not_exists(engine, "reservations", "room_type", "VARCHAR"):
@@ -100,7 +105,9 @@ async def migrate_add_room_types(engine: AsyncEngine) -> None:
     _LOGGER.info("Migration add_room_types: No changes needed (already applied)")


-async def migrate_add_advertising_account_ids(engine: AsyncEngine, config: dict[str, Any] | None = None) -> None:
+async def migrate_add_advertising_account_ids(
+    engine: AsyncEngine, config: dict[str, Any] | None = None
+) -> None:
     """Migration: Add advertising account ID fields to reservations table.

     This migration adds two optional fields:
@@ -114,20 +121,27 @@ async def migrate_add_advertising_account_ids(engine: AsyncEngine, config: dict[
     Args:
         engine: SQLAlchemy async engine
         config: Application configuration dict containing hotel account IDs
+
     """
     _LOGGER.info("Running migration: add_advertising_account_ids")

     added_count = 0

     # Add each column if it doesn't exist
-    if await add_column_if_not_exists(engine, "reservations", "meta_account_id", "VARCHAR"):
+    if await add_column_if_not_exists(
+        engine, "reservations", "meta_account_id", "VARCHAR"
+    ):
         added_count += 1

-    if await add_column_if_not_exists(engine, "reservations", "google_account_id", "VARCHAR"):
+    if await add_column_if_not_exists(
+        engine, "reservations", "google_account_id", "VARCHAR"
+    ):
         added_count += 1

     if added_count > 0:
-        _LOGGER.info("Migration add_advertising_account_ids: Added %d columns", added_count)
+        _LOGGER.info(
+            "Migration add_advertising_account_ids: Added %d columns", added_count
+        )
     else:
         _LOGGER.info("Migration add_advertising_account_ids: Columns already exist")

@@ -135,10 +149,14 @@ async def migrate_add_advertising_account_ids(engine: AsyncEngine, config: dict[
     if config:
         await _backfill_advertising_account_ids(engine, config)
     else:
-        _LOGGER.warning("No config provided, skipping backfill of advertising account IDs")
+        _LOGGER.warning(
+            "No config provided, skipping backfill of advertising account IDs"
+        )


-async def _backfill_advertising_account_ids(engine: AsyncEngine, config: dict[str, Any]) -> None:
+async def _backfill_advertising_account_ids(
+    engine: AsyncEngine, config: dict[str, Any]
+) -> None:
     """Backfill advertising account IDs for existing reservations.

     Updates existing reservations to populate meta_account_id and google_account_id
@@ -149,6 +167,7 @@ async def _backfill_advertising_account_ids(engine: AsyncEngine, config: dict[st
     Args:
         engine: SQLAlchemy async engine
         config: Application configuration dict
+
     """
     _LOGGER.info("Backfilling advertising account IDs for existing reservations...")

@@ -164,7 +183,7 @@ async def _backfill_advertising_account_ids(engine: AsyncEngine, config: dict[st
         if hotel_id:
             hotel_accounts[hotel_id] = {
                 "meta_account": meta_account,
-                "google_account": google_account
+                "google_account": google_account,
             }

     if not hotel_accounts:
@@ -188,11 +207,15 @@ async def _backfill_advertising_account_ids(engine: AsyncEngine, config: dict[st
             )
             result = await conn.execute(
                 sql,
-                {"meta_account": accounts["meta_account"], "hotel_id": hotel_id}
+                {"meta_account": accounts["meta_account"], "hotel_id": hotel_id},
             )
             count = result.rowcount
             if count > 0:
-                _LOGGER.info("Updated %d reservations with meta_account_id for hotel %s", count, hotel_id)
+                _LOGGER.info(
+                    "Updated %d reservations with meta_account_id for hotel %s",
+                    count,
+                    hotel_id,
+                )
                 meta_updated += count

             # Update reservations with google_account_id where gclid is present
@@ -210,21 +233,30 @@ async def _backfill_advertising_account_ids(engine: AsyncEngine, config: dict[st
             )
             result = await conn.execute(
                 sql,
-                {"google_account": accounts["google_account"], "hotel_id": hotel_id}
+                {
+                    "google_account": accounts["google_account"],
+                    "hotel_id": hotel_id,
+                },
             )
             count = result.rowcount
             if count > 0:
-                _LOGGER.info("Updated %d reservations with google_account_id for hotel %s", count, hotel_id)
+                _LOGGER.info(
+                    "Updated %d reservations with google_account_id for hotel %s",
+                    count,
+                    hotel_id,
+                )
                 google_updated += count

     _LOGGER.info(
         "Backfill complete: %d reservations updated with meta_account_id, %d with google_account_id",
         meta_updated,
-        google_updated
+        google_updated,
     )


-async def migrate_add_username_to_acked_requests(engine: AsyncEngine, config: dict[str, Any] | None = None) -> None:
+async def migrate_add_username_to_acked_requests(
+    engine: AsyncEngine, config: dict[str, Any] | None = None
+) -> None:
     """Migration: Add username column to acked_requests table and backfill with hotel usernames.

     This migration adds a username column to acked_requests to track acknowledgements by username
@@ -238,6 +270,7 @@ async def migrate_add_username_to_acked_requests(engine: AsyncEngine, config: di
     Args:
         engine: SQLAlchemy async engine
         config: Application configuration dict containing hotel usernames
+
     """
     _LOGGER.info("Running migration: add_username_to_acked_requests")

@@ -252,10 +285,14 @@ async def migrate_add_username_to_acked_requests(engine: AsyncEngine, config: di
     if config:
         await _backfill_acked_requests_username(engine, config)
     else:
-        _LOGGER.warning("No config provided, skipping backfill of acked_requests usernames")
+        _LOGGER.warning(
+            "No config provided, skipping backfill of acked_requests usernames"
+        )


-async def _backfill_acked_requests_username(engine: AsyncEngine, config: dict[str, Any]) -> None:
+async def _backfill_acked_requests_username(
+    engine: AsyncEngine, config: dict[str, Any]
+) -> None:
     """Backfill username for existing acked_requests records.

     For each acknowledgement, find the corresponding reservation to determine its hotel_code,
@@ -264,6 +301,7 @@ async def _backfill_acked_requests_username(engine: AsyncEngine, config: dict[st
     Args:
         engine: SQLAlchemy async engine
         config: Application configuration dict
+
     """
     _LOGGER.info("Backfilling usernames for existing acked_requests...")

@@ -297,15 +335,53 @@ async def _backfill_acked_requests_username(engine: AsyncEngine, config: dict[st
                 AND username IS NULL
             """)
             result = await conn.execute(
-                sql,
-                {"username": username, "hotel_id": hotel_id}
+                sql, {"username": username, "hotel_id": hotel_id}
             )
             count = result.rowcount
             if count > 0:
-                _LOGGER.info("Updated %d acknowledgements with username for hotel %s", count, hotel_id)
+                _LOGGER.info(
+                    "Updated %d acknowledgements with username for hotel %s",
+                    count,
+                    hotel_id,
+                )
                 total_updated += count

-    _LOGGER.info("Backfill complete: %d acknowledgements updated with username", total_updated)
+    _LOGGER.info(
+        "Backfill complete: %d acknowledgements updated with username", total_updated
+    )


+async def table_exists(engine: AsyncEngine, table_name: str) -> bool:
+    """Check if a table exists in the database.
+
+    Args:
+        engine: SQLAlchemy async engine
+        table_name: Name of the table to check
+
+    Returns:
+        True if table exists, False otherwise
+
+    """
+    async with engine.connect() as conn:
+
+        def _check(connection):
+            inspector = inspect(connection)
+            return table_name in inspector.get_table_names()
+
+        return await conn.run_sync(_check)
+
+
+async def drop_table(engine: AsyncEngine, table_name: str) -> None:
+    """Drop a table from the database.
+
+    Args:
+        engine: SQLAlchemy async engine
+        table_name: Name of the table to drop
+
+    """
+    async with engine.begin() as conn:
+        await conn.execute(text(f"DROP TABLE IF EXISTS {table_name}"))
+        _LOGGER.info("Dropped table: %s", table_name)
@@ -313,7 +389,7 @@ async def migrate_normalize_conversions(engine: AsyncEngine) -> None:

     This migration redesigns the conversion data structure:
     - conversions: One row per PMS reservation (with guest/advertising metadata)
-    - room_reservations: One row per room reservation (linked to conversion)
+    - conversion_rooms: One row per room reservation (linked to conversion)
     - daily_sales: JSON array of daily sales within each room reservation
     - total_revenue: Extracted sum of all daily sales for efficiency

@@ -326,20 +402,88 @@ async def migrate_normalize_conversions(engine: AsyncEngine) -> None:
     - Efficient querying via extracted total_revenue field
     - All daily sales details preserved in JSON for analysis

-    The tables are created via Base.metadata.create_all() at startup.
+    The new tables are created via Base.metadata.create_all() at startup.
     This migration handles cleanup of old schema versions.

     Safe to run multiple times - idempotent.
     """
     _LOGGER.info("Running migration: normalize_conversions")

     # Check if the old conversions table exists with the old schema
+    # If the table exists but doesn't match our current schema definition, drop it
     old_conversions_exists = await table_exists(engine, "conversions")

     if old_conversions_exists:
+        # Check if this is the old-style table (we'll look for unexpected columns)
+        # The old table would not have the new structure we've defined
+        async with engine.connect() as conn:
+
+            def _get_columns(connection):
+                inspector = inspect(connection)
+                return [col["name"] for col in inspector.get_columns("conversions")]
+
+            old_columns = await conn.run_sync(_get_columns)
+
+        # Expected columns in the new schema (defined in db.py)
+        # If the table is missing key columns from our schema, it's the old version
+        expected_columns = {
+            "id",
+            "reservation_id",
+            "customer_id",
+            "hashed_customer_id",
+            "hotel_id",
+            "pms_reservation_id",
+            "reservation_number",
+            "reservation_date",
+            "creation_time",
+            "reservation_type",
+            "booking_channel",
+            "guest_first_name",
+            "guest_last_name",
+            "guest_email",
+            "guest_country_code",
+            "advertising_medium",
+            "advertising_partner",
+            "advertising_campagne",
+            "created_at",
+            "updated_at",
+        }
+
+        old_columns_set = set(old_columns)
+
+        # If we're missing critical new columns, this is the old schema
+        if not expected_columns.issubset(old_columns_set):
+            _LOGGER.info(
+                "Found old conversions table with incompatible schema. "
+                "Old columns: %s. Expected new columns: %s",
+                old_columns,
+                expected_columns,
+            )
+            await drop_table(engine, "conversions")
+            _LOGGER.info(
+                "Dropped old conversions table to allow creation of new schema"
+            )
+        else:
+            _LOGGER.info(
+                "Conversions table exists with compatible schema, no migration needed"
+            )

+    # Check for the old conversion_rooms table (which should not exist in the new schema)
+    old_conversion_rooms_exists = await table_exists(engine, "conversion_rooms")
+    if old_conversion_rooms_exists:
+        await drop_table(engine, "conversion_rooms")
+        _LOGGER.info("Dropped old conversion_rooms table")

     _LOGGER.info(
-        "Conversion data structure redesigned: "
-        "conversions (1 per PMS reservation) + "
-        "room_reservations (1 per room, daily_sales as JSON). "
-        "Tables created/updated via Base.metadata.create_all()"
+        "Migration normalize_conversions: Conversion data structure normalized. "
+        "New tables (conversions + conversion_rooms) will be created/updated via "
+        "Base.metadata.create_all()"
     )


-async def run_all_migrations(engine: AsyncEngine, config: dict[str, Any] | None = None) -> None:
+async def run_all_migrations(
+    engine: AsyncEngine, config: dict[str, Any] | None = None
+) -> None:
     """Run all pending migrations.

     This function should be called at app startup, after Base.metadata.create_all.
@@ -348,6 +492,7 @@ async def run_all_migrations(engine: AsyncEngine, config: dict[str, Any] | None
     Args:
         engine: SQLAlchemy async engine
         config: Application configuration dict (optional, but required for some migrations)
+
     """
     _LOGGER.info("Starting database migrations...")
src/alpine_bits_python/run_api.py
@@ -1,15 +1,34 @@
 #!/usr/bin/env python3
-"""Startup script for the Wix Form Handler API."""
+"""Startup script for the Wix Form Handler API.

-import os
+This script:
+1. Runs database migrations using Alembic
+2. Starts the FastAPI application with uvicorn
+
+Database migrations are run BEFORE starting the server to ensure the schema
+is up to date. This approach works well with multiple workers since migrations
+complete before any worker starts processing requests.
+"""
+
 import sys

 import uvicorn

-if __name__ == "__main__":
-    # db_path = "alpinebits.db"  # Adjust path if needed
-    # if os.path.exists(db_path):
-    #     os.remove(db_path)
+from alpine_bits_python.run_migrations import run_migrations

+if __name__ == "__main__":
+    # Run database migrations before starting the server
+    # This ensures the schema is up to date before any workers start
+    print("Running database migrations...")
+    try:
+        run_migrations()
+        print("Database migrations completed successfully")
+    except Exception as e:
+        print(f"Failed to run migrations: {e}", file=sys.stderr)
+        sys.exit(1)
+
     # Start the API server
     print("Starting API server...")
     uvicorn.run(
         "alpine_bits_python.api:app",
         host="0.0.0.0",
74
src/alpine_bits_python/run_migrations.py
Normal file
@@ -0,0 +1,74 @@
#!/usr/bin/env python3
"""Run database migrations using Alembic.

This script should be run before starting the application to ensure
the database schema is up to date. It can be run standalone or called
from run_api.py before starting uvicorn.

Usage:
    uv run python -m alpine_bits_python.run_migrations
  or
    from alpine_bits_python.run_migrations import run_migrations
    run_migrations()
"""

import subprocess
import sys
from pathlib import Path

from .logging_config import get_logger

_LOGGER = get_logger(__name__)


def run_migrations() -> None:
    """Run Alembic migrations to upgrade database to latest schema.

    This function runs 'alembic upgrade head' to apply all pending migrations.
    It will exit the process if migrations fail.

    Raises:
        SystemExit: If migrations fail
    """
    _LOGGER.info("Running database migrations...")

    # Get the project root directory (where alembic.ini is located)
    # Assuming this file is in src/alpine_bits_python/
    project_root = Path(__file__).parent.parent.parent

    try:
        # Run alembic upgrade head
        result = subprocess.run(
            ["alembic", "upgrade", "head"],
            cwd=project_root,
            capture_output=True,
            text=True,
            check=True,
        )

        _LOGGER.info("Database migrations completed successfully")
        _LOGGER.debug("Migration output: %s", result.stdout)

    except subprocess.CalledProcessError as e:
        _LOGGER.error("Failed to run database migrations:")
        _LOGGER.error("Exit code: %d", e.returncode)
        _LOGGER.error("stdout: %s", e.stdout)
        _LOGGER.error("stderr: %s", e.stderr)
        sys.exit(1)
    except FileNotFoundError:
        _LOGGER.error(
            "Alembic not found. Please ensure it's installed: uv pip install alembic"
        )
        sys.exit(1)


if __name__ == "__main__":
    # Configure basic logging if run directly
    import logging

    logging.basicConfig(
        level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    )

    run_migrations()
    print("Migrations completed successfully!")
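Shelling out to the alembic binary keeps things simple, but Alembic also exposes a Python API, so the same upgrade can run in-process — which removes the FileNotFoundError failure mode entirely. A sketch under the same project-layout assumption (alembic.ini three levels above this module):

    from pathlib import Path

    from alembic import command
    from alembic.config import Config


    def run_migrations_inprocess() -> None:
        """Apply all pending migrations via Alembic's Python API."""
        project_root = Path(__file__).parent.parent.parent  # same layout assumption
        cfg = Config(str(project_root / "alembic.ini"))
        command.upgrade(cfg, "head")  # equivalent to `alembic upgrade head`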
28
uv.lock
generated
@@ -14,12 +14,27 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/f5/10/6c25ed6de94c49f88a91fa5018cb4c0f3625f31d5be9f771ebe5cc7cd506/aiosqlite-0.21.0-py3-none-any.whl", hash = "sha256:2549cf4057f95f53dcba16f2b64e8e2791d7e1adedb13197dd8ed77bb226d7d0", size = 15792, upload-time = "2025-02-03T07:30:13.6Z" },
 ]

+[[package]]
+name = "alembic"
+version = "1.17.2"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "mako" },
+    { name = "sqlalchemy" },
+    { name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/02/a6/74c8cadc2882977d80ad756a13857857dbcf9bd405bc80b662eb10651282/alembic-1.17.2.tar.gz", hash = "sha256:bbe9751705c5e0f14877f02d46c53d10885e377e3d90eda810a016f9baa19e8e", size = 1988064, upload-time = "2025-11-14T20:35:04.057Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/ba/88/6237e97e3385b57b5f1528647addea5cc03d4d65d5979ab24327d41fb00d/alembic-1.17.2-py3-none-any.whl", hash = "sha256:f483dd1fe93f6c5d49217055e4d15b905b425b6af906746abb35b69c1996c4e6", size = 248554, upload-time = "2025-11-14T20:35:05.699Z" },
+]
+
 [[package]]
 name = "alpine-bits-python-server"
 version = "0.1.2"
 source = { editable = "." }
 dependencies = [
     { name = "aiosqlite" },
+    { name = "alembic" },
     { name = "annotatedyaml" },
     { name = "asyncpg" },
     { name = "dotenv" },
@@ -51,6 +66,7 @@ dev = [
 [package.metadata]
 requires-dist = [
     { name = "aiosqlite", specifier = ">=0.21.0" },
+    { name = "alembic", specifier = ">=1.17.2" },
     { name = "annotatedyaml", specifier = ">=1.0.0" },
     { name = "asyncpg", specifier = ">=0.30.0" },
     { name = "dotenv", specifier = ">=0.9.9" },
@@ -585,6 +601,18 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/0a/44/9613f300201b8700215856e5edd056d4e58dd23368699196b58877d4408b/lxml-6.0.1-cp314-cp314-win_arm64.whl", hash = "sha256:2834377b0145a471a654d699bdb3a2155312de492142ef5a1d426af2c60a0a31", size = 3753901, upload-time = "2025-08-22T10:34:45.799Z" },
 ]

+[[package]]
+name = "mako"
+version = "1.3.10"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "markupsafe" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/9e/38/bd5b78a920a64d708fe6bc8e0a2c075e1389d53bef8413725c63ba041535/mako-1.3.10.tar.gz", hash = "sha256:99579a6f39583fa7e5630a28c3c1f440e4e97a414b80372649c0ce338da2ea28", size = 392474, upload-time = "2025-04-10T12:44:31.16Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/87/fb/99f81ac72ae23375f22b7afdb7642aba97c00a713c217124420147681a2f/mako-1.3.10-py3-none-any.whl", hash = "sha256:baef24a52fc4fc514a0887ac600f9f1cff3d82c61d4d700a1fa84d597b88db59", size = 78509, upload-time = "2025-04-10T12:50:53.297Z" },
+]
+
 [[package]]
 name = "markupsafe"
 version = "3.0.2"