# Deployment: hbf-data-manager
Infrastructure config for this service. For the full platform deployment, see `docs/architecture/deployment.md`.
## Runtime

- Port: `3000` (default; overridden by the `PORT` env var)
- Base image (build): `node:latest`
- Base image (runtime): `node:latest`
- Start command: `npm run start:prod`
- Health check: `GET /health` returns `{"status":"ok","timestamp":"<ISO>","uptimeSeconds":<N>}`
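A successful health probe (e.g. `curl -fsS http://localhost:3000/health`) returns a body of this shape; the timestamp and uptime values below are illustrative:

```json
{
  "status": "ok",
  "timestamp": "2024-01-01T00:00:00.000Z",
  "uptimeSeconds": 123
}
```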
## Required Environment Variables
| Variable | Example | Description |
|---|---|---|
| PORT | 3000 | HTTP listen port |
| NODE_ENV | production | Runtime environment |
| CORE_URL | http://hbf-core:8080 | hbf-core base URL |
| CORE_TOKEN | secret | Core API auth token |
| MYSQL_HOST | mysql | MySQL hostname |
| MYSQL_PORT | 3306 | MySQL port |
| MYSQL_USER | hbf | MySQL username |
| MYSQL_PASSWORD | secret | MySQL password |
| MYSQL_DB | hbf_data_manager | MySQL database name |
| DB_SYNCHRONIZE | false | TypeORM schema sync (disable in prod) |
| TYPEORM_AUTORUN_MIGRATIONS | true | Run migrations on startup |
| KAFKA_BROKERS | kafka:9092 | Comma-separated Kafka broker list |
| KAFKA_CLIENT_ID | hbf-data-manager | Kafka client identifier |
| KAFKA_GROUP_ID | hbf-data-manager | Kafka consumer group ID |
| KAFKA_TOPICS | topic1,topic2 | Comma-separated topics to subscribe |
| PINO_LOGGER_USE | true | Enable Pino logger |
| PINO_LOG_LEVEL | info | Pino log level |
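Pulled together from the table above, a minimal `.env` for local use might look like this (all values are the illustrative examples from the table; real secrets must come from your environment):

```ini
PORT=3000
NODE_ENV=production
CORE_URL=http://hbf-core:8080
CORE_TOKEN=secret
MYSQL_HOST=mysql
MYSQL_PORT=3306
MYSQL_USER=hbf
MYSQL_PASSWORD=secret
MYSQL_DB=hbf_data_manager
DB_SYNCHRONIZE=false
TYPEORM_AUTORUN_MIGRATIONS=true
KAFKA_BROKERS=kafka:9092
KAFKA_CLIENT_ID=hbf-data-manager
KAFKA_GROUP_ID=hbf-data-manager
KAFKA_TOPICS=topic1,topic2
PINO_LOGGER_USE=true
PINO_LOG_LEVEL=info
```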
## Kafka Security (optional; required for SSL/SASL brokers)
| Variable | Example | Description |
|---|---|---|
| KAFKA_SSL | true | Not used in code. SSL is derived from KAFKA_SECURITY_PROTOCOL |
| KAFKA_SECURITY_PROTOCOL | SASL_SSL | Security protocol |
| KAFKA_SASL_MECHANISM | PLAIN | Not used in code. SASL mechanism is hardcoded as 'plain' |
| KAFKA_SASL_USERNAME | user | SASL username |
| KAFKA_SASL_PASSWORD | secret | SASL password |
| KAFKAJS_NO_PARTITIONER_WARNING | 1 | Suppress KafkaJS partitioner warning |
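For a SASL_SSL broker, the variables from this table combine like the sketch below. Per the table, `KAFKA_SSL` and `KAFKA_SASL_MECHANISM` are not read by the code (SSL is derived from the protocol and the mechanism is hardcoded), so they are omitted; values are illustrative:

```ini
KAFKA_SECURITY_PROTOCOL=SASL_SSL
KAFKA_SASL_USERNAME=user
KAFKA_SASL_PASSWORD=secret
KAFKAJS_NO_PARTITIONER_WARNING=1
```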
## Docker
```sh
# Build (requires GITHUB_TOKEN for the @helvia npm registry)
docker build --build-arg GITHUB_TOKEN=<token> -t hbf-data-manager .

# Run
docker run -p 3000:3000 --env-file .env hbf-data-manager
```
Multi-stage build: stage 1 (build) compiles TypeScript; stage 2 (runtime) copies the compiled `dist/` and `node_modules`.
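A sketch of that two-stage layout. This is illustrative, not the actual Dockerfile; file paths and the `npm run build` script name are assumptions, while the base image, build arg, and start command come from the sections above:

```dockerfile
# Stage 1: build (GITHUB_TOKEN needed for @helvia packages)
FROM node:latest AS build
ARG GITHUB_TOKEN
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: runtime, carrying over only compiled dist/ and node_modules
FROM node:latest
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY package*.json ./
CMD ["npm", "run", "start:prod"]
```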
## docker-compose (standalone dev stack)
The bundled `docker-compose.yml` brings up four services:
| Service | Image | Ports | Notes |
|---|---|---|---|
| app | hbf-data-manager (built) | 3000:3000 | Depends on db + kafka healthy |
| db | mysql:8.4 | 3306:3306 | Dev credentials from env or defaults |
| kafka | confluentinc/cp-kafka:7.6.1 | 9092:9092, 29092:29092 | KRaft mode (no ZooKeeper), cluster ID fixed |
| control-center | confluentinc/cp-enterprise-control-center:7.6.1 | 9021:9021 (configurable via CONTROL_CENTER_PORT) | Confluent Control Center UI |
In the platform's local dev environment this service runs natively (`npm run start:dev`) against the shared MySQL instance, without spinning up its own Kafka.
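The `app` row's "depends on db + kafka healthy" maps to compose `depends_on` conditions; a minimal sketch (service names from the table above, healthcheck definitions omitted):

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy
      kafka:
        condition: service_healthy
```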
## CI/CD
- Workflow: `ci.yml`
- Trigger: push to `develop`, `staging`, `main`
- Steps: SonarQube audit (parallel) + Docker build → push to AWS ECR → kubectl deploy
- Deploy target: AWS EKS (`eu-central-1`)
  - `develop` → `helvia-dev` namespace
  - `staging` → `helvia-stg` namespace
  - `main` → `helvia` namespace
- Rollout timeout: 600s
- Build arg: `GITHUB_TOKEN` (from the `PAT_TOKEN` secret) for the `@helvia` npm scope
- K8s config: `KUBE_CONFIG_DATA_NEW` secret
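An illustrative excerpt of what the build-and-deploy end of `ci.yml` amounts to. Step names, the image tag scheme, and the `ECR_REGISTRY` variable are assumptions; only the secret names, namespace, and timeout come from the list above:

```yaml
- name: Build and push to ECR
  run: |
    docker build --build-arg GITHUB_TOKEN=${{ secrets.PAT_TOKEN }} \
      -t "$ECR_REGISTRY/hbf-data-manager:$GITHUB_SHA" .
    docker push "$ECR_REGISTRY/hbf-data-manager:$GITHUB_SHA"
- name: Deploy to EKS
  run: |
    kubectl -n helvia-dev set image deployment/hbf-data-manager \
      app="$ECR_REGISTRY/hbf-data-manager:$GITHUB_SHA"
    kubectl -n helvia-dev rollout status deployment/hbf-data-manager --timeout=600s
```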
## Notes
- Kafka is a new infrastructure dependency not present in the platform's shared docker-compose. The standalone docker-compose runs it in KRaft mode (no ZooKeeper); in production (EKS) the service connects to an external Kafka cluster via `KAFKA_BROKERS`.
- The `node:latest` base image is unpinned; consider pinning to a specific version for reproducible builds.
- The Dockerfile has no explicit `EXPOSE` directive; the port is driven entirely by the `PORT` env var (defaults to 3000 in code).