fix: Change health check from curl to wget for alpine compatibility

- Backend container uses node:20-alpine which doesn't include curl
- wget is available by default in alpine
- Health check endpoint is working, just the command was wrong
- Also added Sprint 1 documentation and credentials file
fullsizemalt 2025-12-09 13:39:43 -08:00
parent b370a2e151
commit d2c3e67e40
3 changed files with 267 additions and 1 deletion

CREDENTIALS.md (new file)

@@ -0,0 +1,155 @@
# 🔐 CA Grow Ops Manager - Login Credentials
**Environment**: Production
**URL**: <https://777wolfpack.runfoo.run>
**Last Updated**: 2025-12-09
---
## Default User Accounts
### Owner Account
- **Email**: `admin@runfoo.com`
- **Password**: `password123`
- **Role**: OWNER
- **Permissions**: Full access to all features
- **Hourly Rate**: $50.00
---
## Seeded Data
### Rooms
The following rooms are pre-configured:
1. **Veg Room 1** - VEG type, 1200 sqft
2. **Flower Room A** - FLOWER type, 2500 sqft
3. **Flower Room B** - FLOWER type, 2500 sqft
4. **Dry Room** - DRY type, 800 sqft
---
## Testing the Application
### 1. Login
1. Navigate to <https://777wolfpack.runfoo.run>
2. Enter email: `admin@runfoo.com`
3. Enter password: `password123`
4. Click "Login"
### 2. Available Features (Currently Implemented)
- ✅ **Dashboard**: Overview of facility operations
- ✅ **Rooms**: View and manage grow rooms
- ✅ **Batches**: View and manage cultivation batches
- ✅ **Timeclock**: Clock in/out for labor tracking
### 3. Features In Development
- ⏳ **Tasks**: Task management and scheduling
- ⏳ **Compliance**: Document storage and audit packets
- ⏳ **Settings**: User preferences and management
- ⏳ **Reports**: Labor analytics and cost tracking
---
## API Access
### Base URL
```
https://777wolfpack.runfoo.run/api
```
### Authentication
```bash
# Login to get JWT token
curl -X POST https://777wolfpack.runfoo.run/api/auth/login \
-H "Content-Type: application/json" \
-d '{
"email": "admin@runfoo.com",
"password": "password123"
}'
# Response includes:
# - accessToken (use in Authorization header)
# - refreshToken (for token renewal)
```
### Example API Calls
```bash
# Get all rooms (requires auth token)
curl https://777wolfpack.runfoo.run/api/rooms \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN"
# Get all batches
curl https://777wolfpack.runfoo.run/api/batches \
-H "Authorization: Bearer YOUR_ACCESS_TOKEN"
```
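Putting the two together, assuming `jq` is installed on the machine making the calls and that `accessToken` is a top-level field in the login response:

```bash
# Log in and capture the access token (field name assumed from the response notes above)
TOKEN=$(curl -s -X POST https://777wolfpack.runfoo.run/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "admin@runfoo.com", "password": "password123"}' \
  | jq -r '.accessToken')

# Reuse the token for authenticated calls
curl https://777wolfpack.runfoo.run/api/rooms \
  -H "Authorization: Bearer $TOKEN"
```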
---
## Security Notes
⚠️ **IMPORTANT**: These are development/demo credentials.
**Before production use**:
1. Change all default passwords
2. Implement proper password hashing (currently using plaintext in seed)
3. Enable 2FA for owner accounts
4. Rotate JWT secrets (see the sketch after this list)
5. Implement password complexity requirements
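For item 4, one common way to mint a replacement secret (a sketch only, assuming `openssl` is available; where the backend reads the secret from is project-specific):

```bash
# Generate a random 48-byte, base64-encoded value suitable for signing JWTs
openssl rand -base64 48
```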
---
## Troubleshooting
### "Invalid credentials" error
- Verify you're using the exact email: `admin@runfoo.com`
- Password is case-sensitive: `password123`
- Check if database has been seeded (see deployment logs)
### "Cannot connect to server"
- Verify backend is healthy: `docker compose ps`
- Check backend logs: `docker compose logs backend`
- Verify Traefik routing is working
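One quick end-to-end check of the route (assuming `/api/healthz` is exposed through Traefik like the rest of the API):

```bash
# A 200 response here means Traefik -> backend routing is intact
curl -i https://777wolfpack.runfoo.run/api/healthz
```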
### Need to re-seed database
```bash
ssh admin@nexus-vector
cd /srv/containers/ca-grow-ops-manager
docker compose exec backend npx prisma db seed
```
---
## Adding More Users
Currently, user registration is only available through the seed script or direct database access.
**Coming in Sprint 2**: Admin user management UI
**Temporary workaround** (SSH to nexus-vector):
```bash
docker compose exec backend npx prisma studio
# Opens Prisma Studio on localhost:5555
# Add users manually through the UI
```
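Prisma Studio listens inside the container, so reaching it from a workstation needs some port plumbing. A minimal sketch, assuming port 5555 is published from the container to the host:

```bash
# Forward local port 5555 to nexus-vector, then browse http://localhost:5555
ssh -L 5555:localhost:5555 admin@nexus-vector
```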
---
**Need help?** Check the logs:
```bash
ssh admin@nexus-vector "cd /srv/containers/ca-grow-ops-manager && docker compose logs -f"
```

docker-compose.yml

@@ -50,7 +50,7 @@ services:
     networks:
       - internal
     healthcheck:
-      test: [ "CMD", "curl", "-f", "http://localhost:3000/api/healthz" ]
+      test: [ "CMD", "wget", "-q", "-O-", "http://localhost:3000/api/healthz" ]
       interval: 30s
       timeout: 10s
       retries: 3

Sprint 1 documentation (new file)

@@ -0,0 +1,111 @@
# Sprint 1: Fix Backend Health Check
**Date**: 2025-12-09
**Status**: In Progress
**Duration**: 30 minutes
**Priority**: 🔴 Critical
---
## 🎯 Objective
Fix the unhealthy backend container by resolving the Docker health check issue.
---
## 🔍 Problem Diagnosis
### Current State
```
ca-grow-ops-manager-backend-1 Up 41 minutes (unhealthy)
```
### Root Cause
The health check in `docker-compose.yml` uses `curl`:
```yaml
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:3000/api/healthz" ]
```
However, the backend container (node:20-alpine) **does not have curl installed**.
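A direct way to confirm this on the host (container name as reported by `docker compose ps`; the exact error text may vary):

```bash
docker exec ca-grow-ops-manager-backend-1 curl --version
# Expected to fail on node:20-alpine with something like:
#   exec: "curl": executable file not found in $PATH
```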
### Evidence
- Backend logs show server is running: `Server listening at http://0.0.0.0:3000`
- Backend is successfully serving requests (login, rooms endpoints working)
- Health check endpoint exists at `/api/healthz`
---
## ✅ Solution
### Option 1: Use wget (already available in alpine)
Change health check to use `wget` instead of `curl`.
### Option 2: Install curl in Dockerfile
Add `curl` to the alpine packages.
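For reference, Option 2 would amount to installing the package into the image, roughly (a sketch only, not applied):

```bash
# Option 2 (not chosen): add curl to the alpine-based image,
# e.g. as a RUN step in the backend Dockerfile
apk add --no-cache curl
```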
**Chosen**: Option 1 (wget) - simpler, no rebuild needed for testing.
---
## 📋 Implementation Steps
1. ✅ Diagnose issue (check logs, verify endpoint exists)
2. ⏳ Update `docker-compose.yml` health check to use `wget`
3. ⏳ Commit and push changes
4. ⏳ Deploy to nexus-vector (see the sketch after this list)
5. ⏳ Verify health check passes
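Steps 4 and 5 will likely look something like this (a sketch, assuming the compose project on nexus-vector is a git checkout at the path used elsewhere in this doc):

```bash
ssh admin@nexus-vector
cd /srv/containers/ca-grow-ops-manager
git pull                       # pick up the updated docker-compose.yml
docker compose up -d backend   # recreate the backend with the new health check
docker compose ps              # should eventually report (healthy)
```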
---
## 🧪 Testing
### Manual Test
```bash
# SSH to nexus-vector
ssh admin@nexus-vector
# Test wget works in container
docker exec ca-grow-ops-manager-backend-1 wget -q -O- http://localhost:3000/api/healthz
# Expected output:
# {"status":"ok","timestamp":"2025-12-09T..."}
```
### Verify Health
```bash
docker compose ps
# All services should show (healthy)
```
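If the status doesn't flip to `(healthy)`, the recorded health check attempts can be inspected directly:

```bash
# Shows the last few health check runs, their exit codes, and captured output
docker inspect --format '{{json .State.Health}}' ca-grow-ops-manager-backend-1
```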
---
## 📊 Success Criteria
- [ ] Backend container shows `(healthy)` status
- [ ] Health check endpoint returns 200 OK
- [ ] No errors in backend logs
- [ ] Application remains accessible at <https://777wolfpack.runfoo.run>
---
## 🔗 Related Files
- `docker-compose.yml` (line 53)
- `backend/src/server.ts` (lines 28-30)
---
## 📝 Notes
- Backend is actually working fine; only the health check command is wrong
- This is a non-breaking fix (won't affect running services)
- After fix, Docker will correctly report container health