README.md (139 additions, 0 deletions)

@@ -29,3 +29,142 @@ bash ./cks-setup-scripts/setup-cks-latest.sh
```
bash ./cks-setup-scripts/update.sh
```

---

## KAS Support

The setup scripts support deploying CKS with an integrated Key Access Service (KAS) for integration with Virtru's Data Security Platform (DSP). KAS enables advanced features such as attribute-based access control.

### Enabling KAS During Initial Setup

When running `setup-cks-latest.sh`, you'll be prompted:

```
Do you want to enable KAS [yes/no]?
```

Answer **yes** to enable KAS. The setup will automatically configure KAS with standard settings:
- OAuth Issuer: `https://login.virtru.com/oauth2/default`
- OAuth Audience: `https://api.virtru.com`
- KAS URI: Same as your CKS URL (derived from SSL certificate)

KAS bootstraps itself automatically on startup: it registers with the DSP platform, creates the necessary namespace and attributes, and imports keys. No manual provisioning steps or OAuth client credentials are required.
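For reference, the KAS-related entries written to `env/cks.env` look roughly like this (a sketch with illustrative values: the root key is generated per install, and `cks.example.com` stands in for your CKS FQDN):

```bash
# Illustrative excerpt of env/cks.env after enabling KAS.
# KAS_ROOT_KEY is generated with `openssl rand -hex 32` during setup.
KAS_ROOT_KEY=0123456789abcdef...   # 64 hex chars, unique per install
KAS_AUTH_ISSUER=https://login.virtru.com/oauth2/default
KAS_AUTH_AUDIENCE=https://api.virtru.com
KAS_URI=https://cks.example.com    # derived from your SSL certificate
```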

### Adding KAS to Existing CKS Deployment

To add KAS to an existing CKS-only deployment:

1. Run the update script:

   ```bash
   bash ./cks-setup-scripts/update.sh
   ```

2. When prompted, answer **yes** to enable KAS.

3. The script will automatically:
   - Create a backup of your existing configuration
   - Configure KAS with standard settings (no manual input needed)
   - Add KAS environment variables to `env/cks.env`
   - Update `run.sh` with KAS-enabled configuration
   - Preserve all existing CKS keys and configuration

4. Apply the changes:

   ```bash
   docker stop Virtru_CKS
   docker rm Virtru_CKS
   bash /path/to/working-dir/run.sh
   ```

**Important:** Migration is safe and preserves your existing CKS keys and configuration. Your CKS data remains accessible after enabling KAS.

### Architecture

Both CKS-only and CKS+KAS deployments use the same Docker image: `containers.virtru.com/cks:v{VERSION}`

KAS is conditionally enabled based on the presence of `KAS_ROOT_KEY` in the environment configuration. If `KAS_ROOT_KEY` is not set, the KAS process remains dormant with no error logs.
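The gating behavior can be pictured with a small sketch (an illustration of the documented behavior, not the actual container entrypoint):

```shell
#!/bin/sh
# Hypothetical sketch: start KAS only when KAS_ROOT_KEY is present in the
# environment; otherwise report dormancy without emitting error logs.
start_kas_if_configured() {
  if [ -n "${KAS_ROOT_KEY:-}" ]; then
    echo "KAS enabled: starting service"
  else
    echo "KAS_ROOT_KEY not set: KAS remains dormant"
  fi
}

start_kas_if_configured
```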

#### CKS-Only Deployment
- **Services:** Orchestrated by supervisord:
  - CKS (Node.js application on internal port 3000)
  - Caddy (reverse proxy on internal port 9000)
- **Port:** External port 443 → Internal port 9000 (Caddy) → Port 3000 (CKS)
- **Database:** None required

#### CKS+KAS Deployment
- **Services:** Multiple services orchestrated by supervisord:
- PostgreSQL (internal database on port 5432)
- CKS (Node.js application on internal port 3000)
- KAS (Go service on internal port 8080)
- Caddy (reverse proxy on external port 9000)
- **Port:** External port 443 → Internal port 9000 (Caddy) → Port 3000 (CKS) or 8080 (KAS)
- **Database:** PostgreSQL included in container
- **Bootstrap:** KAS automatically registers with DSP, creates namespace/attributes, and imports keys on startup

#### Traffic Routing

The Caddy reverse proxy routes incoming traffic as follows:

- **CKS Endpoints** → Port 3000 (CKS service):
  - `/rewrap`
  - `/bulk-rewrap`
  - `/public-keys`
  - `/key-pairs`
  - `/status`
  - `/healthz`
  - `/docs`
- **All Other Traffic** → Port 8080 (KAS service)
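The routing rules above can be expressed as a small lookup (a shell illustration of the decision Caddy makes, not the actual Caddy configuration):

```shell
#!/bin/sh
# Map a request path to the backend port, mirroring the documented rules:
# the listed CKS endpoints go to 3000, everything else goes to KAS on 8080.
route_backend() {
  case "$1" in
    /rewrap|/bulk-rewrap|/public-keys|/key-pairs|/status|/healthz|/docs)
      echo 3000 ;;
    *)
      echo 8080 ;;
  esac
}
```

For example, `route_backend /rewrap` prints `3000`, while any unlisted path prints `8080`.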

### Troubleshooting

#### KAS Not Starting

**Symptom:** KAS service shows as "sleeping" in logs

**Solution:**
- Verify `KAS_ROOT_KEY` is set in `env/cks.env`
- Check that all required KAS environment variables are present
- Review logs: `docker logs Virtru_CKS`

#### Bootstrap Failures

**Symptom:** Errors in KAS logs during startup

**Common Causes & Solutions:**
1. **Auth Configuration**
   - Verify `KAS_AUTH_ISSUER` matches your OIDC provider
   - Check that `KAS_AUTH_AUDIENCE` matches the expected audience
2. **Key Files Missing**
   - Verify `KAS_PUBLIC_KEY_FILE` and `KAS_PRIVATE_KEY_FILE` paths point to existing keys
   - Check key file permissions
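One way to run these checks is a small helper that reads the key-file variables from the env file and verifies each path (an illustrative sketch; run it inside the container, e.g. via `docker exec`, since the paths are container paths):

```shell
#!/bin/sh
# Hypothetical helper: report whether KAS_PUBLIC_KEY_FILE and
# KAS_PRIVATE_KEY_FILE are set in an env file and point at existing files.
check_kas_keys() {
  env_file="$1"
  for var in KAS_PUBLIC_KEY_FILE KAS_PRIVATE_KEY_FILE; do
    # Extract the value of $var from the env file (empty if absent).
    path=$(sed -n "s/^${var}=//p" "$env_file")
    if [ -z "$path" ]; then
      echo "$var: not set in $env_file"
    elif [ ! -f "$path" ]; then
      echo "$var: missing file $path"
    else
      echo "$var: OK ($path)"
    fi
  done
}
```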

#### CKS Endpoints Not Working

**Symptom:** CKS endpoints return errors or timeout

**Solution:**
- Review container logs: `docker logs Virtru_CKS`
- Verify SSL certificates are valid and properly mounted

#### Viewing Service Logs

```bash
# All logs
docker logs Virtru_CKS

# Follow logs in real-time
docker logs -f Virtru_CKS

# View supervisor logs
docker exec Virtru_CKS cat /var/log/supervisor/supervisord.log
```

---

## Version Compatibility

Both CKS-only and CKS+KAS deployments use the same Docker image:
- **Image:** `containers.virtru.com/cks:v{VERSION}`
- **Example:** `containers.virtru.com/cks:v1.29.0`

KAS is conditionally enabled within the same image based on environment configuration. When updating, both deployment types use the same VERSION file and Docker image.
setup-cks-latest.sh (80 additions, 2 deletions)

@@ -21,6 +21,9 @@ HMAC_AUTH_ENABLED=false
JWT_AUTH_ENABLED=true
JWT_AUTH_AUDIENCE=""

# KAS defaults
KAS_ENABLED=false

# Yes or No Prompt
prompt () {
while true; do
@@ -109,6 +112,19 @@ while [ $l -ne 36 ]; do
fi
done

printf "\n${GREEN}Key Access Service (KAS) Support${RESET}\n"
printf "KAS enables integration with Virtru's Data Security Platform (DSP).\n"
printf "This allows advanced features like attribute-based access control.\n\n"

if prompt "Do you want to enable KAS [yes/no]?"; then
KAS_ENABLED=true

# Set KAS configuration (no prompts needed - use standard values)
KAS_AUTH_ISSUER="https://login.virtru.com/oauth2/default"
KAS_AUTH_AUDIENCE="https://api.virtru.com"
KAS_URI="https://${CKS_FQDN}"
fi

printf "\nRequests from Virtru to your CKS are authenticated with JWTs.\n"
printf "Authentication via HMACs may be enabled to support requests from CSE to CKS.\n\n"

@@ -164,6 +180,13 @@ else
chmod 644 $PUB_KEY_PATH
fi

# Generate KAS root key if KAS is enabled
KAS_ROOT_KEY=""
if [ "$KAS_ENABLED" = true ]; then
KAS_ROOT_KEY=$(openssl rand -hex 32)
printf "\n${GREEN}KAS root key generated.${RESET}\n"
fi

SECRET_B64_FINAL=""
TOKEN_ID=""
TOKEN_JSON=""
@@ -191,7 +214,9 @@ fi
touch ./env/cks.env

# Write the Environment File
# CKS always runs on port 3000 internally (Caddy proxies on 9000 externally)
printf "PORT=3000\n" >> ./env/cks.env

printf "LOG_RSYSLOG_ENABLED=%s\n" $LOG_RSYS_ENABLED >> ./env/cks.env
printf "LOG_CONSOLE_ENABLED=%s\n" $LOG_CONSOLE_ENABLED >> ./env/cks.env
printf "KEY_PROVIDER_TYPE=%s\n" $KEY_PROVIDER_TYPE >> ./env/cks.env
@@ -210,6 +235,47 @@ if [ "$JWT_AUTH_ENABLED" = true ]; then
printf "JWT_AUTH_AUDIENCE=%s\n" $JWT_AUTH_AUDIENCE >> ./env/cks.env
fi

# Write KAS environment variables if enabled
if [ "$KAS_ENABLED" = true ]; then
# KAS Core Configuration
printf "KAS_ROOT_KEY=%s\n" "$KAS_ROOT_KEY" >> ./env/cks.env
printf "ORG_ID=%s\n" "$JWT_AUTH_AUDIENCE" >> ./env/cks.env
printf "KAS_AUTH_ISSUER=%s\n" "$KAS_AUTH_ISSUER" >> ./env/cks.env
printf "KAS_AUTH_AUDIENCE=%s\n" "$KAS_AUTH_AUDIENCE" >> ./env/cks.env
printf "KAS_URI=%s\n" "$KAS_URI" >> ./env/cks.env
printf "ACM_ENDPOINT=%s\n" "https://api.virtru.com/acm/api" >> ./env/cks.env
printf "SECURE_ENCLAVE_ENDPOINT=%s\n" "https://api.virtru.com/secure-enclave/api" >> ./env/cks.env
printf "WRAPPING_KEY_ID=%s\n" "kas-root-key" >> ./env/cks.env

# KAS Logging Configuration
printf "KAS_LOG_LEVEL=%s\n" "debug" >> ./env/cks.env
printf "KAS_LOG_TYPE=%s\n" "text" >> ./env/cks.env
printf "KAS_LOG_OUTPUT=%s\n" "stdout" >> ./env/cks.env

# Database Configuration
printf "DSP_DB_HOST=%s\n" "localhost" >> ./env/cks.env
printf "DSP_DB_PORT=%s\n" "5432" >> ./env/cks.env
printf "DSP_DB_DATABASE=%s\n" "opentdf" >> ./env/cks.env
printf "DSP_DB_USER=%s\n" "postgres" >> ./env/cks.env
printf "DSP_DB_PASSWORD=%s\n" "$(openssl rand -hex 16)" >> ./env/cks.env
printf "DSP_DB_SSLMODE=%s\n" "prefer" >> ./env/cks.env
printf "DSP_DB_SCHEMA=%s\n" "dsp" >> ./env/cks.env

# Key Configuration
if [ "$KEY_TYPE" = "ECC" ]; then
printf "KEY_ALGORITHM=%s\n" "ec:p256" >> ./env/cks.env
printf "KAS_PUBLIC_KEY_FILE=/app/keys/ecc_p256_001.pub\n" >> ./env/cks.env
printf "KAS_PRIVATE_KEY_FILE=/app/keys/ecc_p256_001.pem\n" >> ./env/cks.env
else
printf "KEY_ALGORITHM=%s\n" "rsa:2048" >> ./env/cks.env
printf "KAS_PUBLIC_KEY_FILE=/app/keys/rsa_001.pub\n" >> ./env/cks.env
printf "KAS_PRIVATE_KEY_FILE=/app/keys/rsa_001.pem\n" >> ./env/cks.env
fi

# Update JWT_AUTH_ISSUER to match KAS_AUTH_ISSUER for consistency
printf "JWT_AUTH_ISSUER=%s\n" "$KAS_AUTH_ISSUER" >> ./env/cks.env
fi

# Print Summary
printf "Summary:\n\n"
printf "\tInstallation\n"
@@ -225,6 +291,15 @@ printf "\tAuth\n"
printf "\tJWT Enabled: %s\n" "$JWT_AUTH_ENABLED"
printf "\tHMAC Enabled: %s\n" "$HMAC_AUTH_ENABLED"
printf "\tVirtru Org ID: %s\n\n" "$JWT_AUTH_AUDIENCE"

if [ "$KAS_ENABLED" = true ]; then
printf "\t${GREEN}KAS Configuration${RESET}\n"
printf "\tKAS Enabled: true\n"
printf "\tKAS Auth Issuer: %s\n" "$KAS_AUTH_ISSUER"
printf "\tKAS Auth Audience: %s\n" "$KAS_AUTH_AUDIENCE"
printf "\tKAS URI: %s\n\n" "$KAS_URI"
fi

printf "\tTroubleshooting\n"
printf "\tSupport URL: %s\n" $SUPPORT_URL
printf "\tSupport Email: %s\n" $SUPPORT_EMAIL
@@ -251,5 +326,8 @@ rm -rf ./cks_info

# Create the Run File
touch ./run.sh
chmod +x ./run.sh

# Generate run.sh (always uses port 9000 via Caddy, no "serve" arg - supervisord manages processes)
echo "docker run --name Virtru_CKS --interactive --tty --detach --restart unless-stopped --env-file "$WORKING_DIR"/env/cks.env -p 443:9000 --mount type=bind,source="$WORKING_DIR"/keys,target="$KEY_PROVIDER_PATH" --mount type=bind,source="$WORKING_DIR"/ssl,target=/app/ssl containers.virtru.com/cks:v"$CKS_VERSION"" > ./run.sh
