diff --git a/README.md b/README.md
index 35e9a83..9b4d724 100644
--- a/README.md
+++ b/README.md
@@ -29,3 +29,142 @@ bash ./cks-setup-scripts/setup-cks-latest.sh
 ```
 bash ./cks-setup-scripts/update.sh
 ```
+
+---
+
+## KAS Support
+
+The setup scripts can deploy CKS with an integrated Key Access Service (KAS). KAS enables integration with Virtru's Data Security Platform (DSP) and advanced features such as attribute-based access control.
+
+### Enabling KAS During Initial Setup
+
+When running `setup-cks-latest.sh`, you'll be prompted:
+
+```
+Do you want to enable KAS [yes/no]?
+```
+
+Answer **yes** to enable KAS. The setup will automatically configure KAS with standard settings:
+- OAuth Issuer: `https://login.virtru.com/oauth2/default`
+- OAuth Audience: `https://api.virtru.com`
+- KAS URI: same as your CKS URL (derived from the SSL certificate)
+
+KAS bootstraps itself automatically on startup: it registers with the DSP platform, creates the necessary namespace and attributes, and imports keys. No manual provisioning steps or OAuth client credentials are required.
+
+### Adding KAS to an Existing CKS Deployment
+
+To add KAS to an existing CKS-only deployment:
+
+1. Run the update script:
+   ```bash
+   bash ./cks-setup-scripts/update.sh
+   ```
+
+2. When prompted, answer **yes** to enable KAS.
+
+3. The script will automatically:
+   - Create a backup of your existing configuration
+   - Configure KAS with standard settings (no manual input needed)
+   - Add the KAS environment variables to `env/cks.env`
+   - Update `run.sh` with a KAS-enabled configuration
+   - Preserve all existing CKS keys and configuration
+
+4. Apply the changes:
+   ```bash
+   docker stop Virtru_CKS
+   docker rm Virtru_CKS
+   bash /path/to/working-dir/run.sh
+   ```
+
+**Important:** The migration is safe: your existing CKS keys and configuration are preserved, and your CKS data remains accessible after KAS is enabled.
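
The detection in step 3 can be reproduced by hand: `update.sh` treats a deployment as KAS-enabled when `KAS_ROOT_KEY` is present in `env/cks.env`. A minimal sketch of that check, using a throwaway directory and a dummy key value in place of your real working directory:

```shell
# Sketch of the KAS-detection check performed by update.sh.
# A throwaway directory stands in for your real working directory.
WORKING_DIR="$(mktemp -d)"
mkdir -p "$WORKING_DIR/env"

# A KAS-enabled env file contains a KAS_ROOT_KEY entry (dummy value here;
# the real setup generates it with `openssl rand -hex 32`).
printf 'KAS_ROOT_KEY=%s\n' "0000deadbeef" > "$WORKING_DIR/env/cks.env"

if grep -q "KAS_ROOT_KEY" "$WORKING_DIR/env/cks.env" 2>/dev/null; then
  echo "Detected existing KAS configuration."
else
  echo "CKS-only deployment."
fi
```

On a real deployment, point `WORKING_DIR` at the directory created by the setup script instead of a temp directory.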
+
+### Architecture
+
+Both CKS-only and CKS+KAS deployments use the same Docker image: `containers.virtru.com/cks:v{VERSION}`
+
+KAS is conditionally enabled based on the presence of `KAS_ROOT_KEY` in the environment configuration. If `KAS_ROOT_KEY` is not set, the KAS process remains dormant and logs no errors.
+
+#### CKS-Only Deployment
+- **Services:** Orchestrated by supervisord:
+  - CKS (Node.js application on internal port 3000)
+  - Caddy (reverse proxy on container port 9000)
+- **Ports:** Host port 443 → Caddy on container port 9000 → CKS on port 3000
+- **Database:** None required
+
+#### CKS+KAS Deployment
+- **Services:** Multiple services orchestrated by supervisord:
+  - PostgreSQL (internal database on port 5432)
+  - CKS (Node.js application on internal port 3000)
+  - KAS (Go service on internal port 8080)
+  - Caddy (reverse proxy on container port 9000)
+- **Ports:** Host port 443 → Caddy on container port 9000 → CKS on port 3000 or KAS on port 8080
+- **Database:** PostgreSQL included in the container
+- **Bootstrap:** KAS automatically registers with DSP, creates the namespace and attributes, and imports keys on startup
+
+#### Traffic Routing
+
+The Caddy reverse proxy routes incoming traffic:
+- **CKS endpoints** → Port 3000 (CKS service)
+  - `/rewrap`
+  - `/bulk-rewrap`
+  - `/public-keys`
+  - `/key-pairs`
+  - `/status`
+  - `/healthz`
+  - `/docs`
+- **All other traffic** → Port 8080 (KAS service)
+
+### Troubleshooting
+
+#### KAS Not Starting
+
+**Symptom:** The KAS service shows as "sleeping" in the logs.
+
+**Solution:**
+- Verify `KAS_ROOT_KEY` is set in `env/cks.env`
+- Check that all required KAS environment variables are present
+- Review the logs: `docker logs Virtru_CKS`
+
+#### Bootstrap Failures
+
+**Symptom:** Errors appear in the KAS logs during startup.
+
+**Common Causes & Solutions:**
+1. **Auth Configuration**
+   - Verify `KAS_AUTH_ISSUER` matches your OIDC provider
+   - Check that `KAS_AUTH_AUDIENCE` matches the expected audience
+
+2. 
**Key Files Missing**
+   - Verify `KAS_PUBLIC_KEY_FILE` and `KAS_PRIVATE_KEY_FILE` point to existing key files
+   - Check the key file permissions
+
+#### CKS Endpoints Not Working
+
+**Symptom:** CKS endpoints return errors or time out.
+
+**Solution:**
+- Review the container logs: `docker logs Virtru_CKS`
+- Verify the SSL certificates are valid and properly mounted
+
+#### Viewing Service Logs
+
+```bash
+# All logs
+docker logs Virtru_CKS
+
+# Follow logs in real time
+docker logs -f Virtru_CKS
+
+# View supervisor logs
+docker exec Virtru_CKS cat /var/log/supervisor/supervisord.log
+```
+
+---
+
+## Version Compatibility
+
+Both CKS-only and CKS+KAS deployments use the same Docker image:
+- **Image:** `containers.virtru.com/cks:v{VERSION}`
+- **Example:** `containers.virtru.com/cks:v1.29.0`
+
+KAS is conditionally enabled within the same image based on environment configuration. When updating, both deployment types use the same VERSION file and Docker image.
diff --git a/setup-cks-latest.sh b/setup-cks-latest.sh
index 0eeac66..31caabf 100644
--- a/setup-cks-latest.sh
+++ b/setup-cks-latest.sh
@@ -21,6 +21,9 @@ HMAC_AUTH_ENABLED=false
 JWT_AUTH_ENABLED=true
 JWT_AUTH_AUDIENCE=""
+# KAS defaults
+KAS_ENABLED=false
+
 # Yes or No Prompt
 prompt () {
   while true; do
@@ -109,6 +112,19 @@ while [ $l -ne 36 ]; do
   fi
 done
+printf "\n${GREEN}Key Access Service (KAS) Support${RESET}\n"
+printf "KAS enables integration with Virtru's Data Security Platform (DSP).\n"
+printf "This allows advanced features like attribute-based access control.\n\n"
+
+if prompt "Do you want to enable KAS [yes/no]?"; then
+  KAS_ENABLED=true
+
+  # Set KAS configuration (no prompts needed - use standard values)
+  KAS_AUTH_ISSUER="https://login.virtru.com/oauth2/default"
+  KAS_AUTH_AUDIENCE="https://api.virtru.com"
+  KAS_URI="https://${CKS_FQDN}"
+fi
+
 printf "\nRequests from Virtru to your CKS are authenticated with JWTs.\n"
 printf "Authentication via HMACs may be enabled to support requests from CSE to CKS.\n\n"
@@ -164,6 +180,13 @@ else chmod 644 $PUB_KEY_PATH fi +# Generate KAS root key if KAS is enabled +KAS_ROOT_KEY="" +if [ "$KAS_ENABLED" = true ]; then + KAS_ROOT_KEY=$(openssl rand -hex 32) + printf "\n${GREEN}KAS root key generated.${RESET}\n" +fi + SECRET_B64_FINAL="" TOKEN_ID="" TOKEN_JSON="" @@ -191,7 +214,9 @@ fi touch ./env/cks.env # Write the Environment File -printf "PORT=%s\n" $PORT >> ./env/cks.env +# CKS always runs on port 3000 internally (Caddy proxies on 9000 externally) +printf "PORT=3000\n" >> ./env/cks.env + printf "LOG_RSYSLOG_ENABLED=%s\n" $LOG_RSYS_ENABLED >> ./env/cks.env printf "LOG_CONSOLE_ENABLED=%s\n" $LOG_CONSOLE_ENABLED >> ./env/cks.env printf "KEY_PROVIDER_TYPE=%s\n" $KEY_PROVIDER_TYPE >> ./env/cks.env @@ -210,6 +235,47 @@ if [ "$JWT_AUTH_ENABLED" = true ]; then printf "JWT_AUTH_AUDIENCE=%s\n" $JWT_AUTH_AUDIENCE >> ./env/cks.env fi +# Write KAS environment variables if enabled +if [ "$KAS_ENABLED" = true ]; then + # KAS Core Configuration + printf "KAS_ROOT_KEY=%s\n" "$KAS_ROOT_KEY" >> ./env/cks.env + printf "ORG_ID=%s\n" "$JWT_AUTH_AUDIENCE" >> ./env/cks.env + printf "KAS_AUTH_ISSUER=%s\n" "$KAS_AUTH_ISSUER" >> ./env/cks.env + printf "KAS_AUTH_AUDIENCE=%s\n" "$KAS_AUTH_AUDIENCE" >> ./env/cks.env + printf "KAS_URI=%s\n" "$KAS_URI" >> ./env/cks.env + printf "ACM_ENDPOINT=%s\n" "https://api.virtru.com/acm/api" >> ./env/cks.env + printf "SECURE_ENCLAVE_ENDPOINT=%s\n" "https://api.virtru.com/secure-enclave/api" >> ./env/cks.env + printf "WRAPPING_KEY_ID=%s\n" "kas-root-key" >> ./env/cks.env + + # KAS Logging Configuration + printf "KAS_LOG_LEVEL=%s\n" "debug" >> ./env/cks.env + printf "KAS_LOG_TYPE=%s\n" "text" >> ./env/cks.env + printf "KAS_LOG_OUTPUT=%s\n" "stdout" >> ./env/cks.env + + # Database Configuration + printf "DSP_DB_HOST=%s\n" "localhost" >> ./env/cks.env + printf "DSP_DB_PORT=%s\n" "5432" >> ./env/cks.env + printf "DSP_DB_DATABASE=%s\n" "opentdf" >> ./env/cks.env + printf "DSP_DB_USER=%s\n" "postgres" >> ./env/cks.env + printf 
"DSP_DB_PASSWORD=%s\n" "$(openssl rand -hex 16)" >> ./env/cks.env + printf "DSP_DB_SSLMODE=%s\n" "prefer" >> ./env/cks.env + printf "DSP_DB_SCHEMA=%s\n" "dsp" >> ./env/cks.env + + # Key Configuration + if [ "$KEY_TYPE" = "ECC" ]; then + printf "KEY_ALGORITHM=%s\n" "ec:p256" >> ./env/cks.env + printf "KAS_PUBLIC_KEY_FILE=/app/keys/ecc_p256_001.pub\n" >> ./env/cks.env + printf "KAS_PRIVATE_KEY_FILE=/app/keys/ecc_p256_001.pem\n" >> ./env/cks.env + else + printf "KEY_ALGORITHM=%s\n" "rsa:2048" >> ./env/cks.env + printf "KAS_PUBLIC_KEY_FILE=/app/keys/rsa_001.pub\n" >> ./env/cks.env + printf "KAS_PRIVATE_KEY_FILE=/app/keys/rsa_001.pem\n" >> ./env/cks.env + fi + + # Update JWT_AUTH_ISSUER to match KAS_AUTH_ISSUER for consistency + printf "JWT_AUTH_ISSUER=%s\n" "$KAS_AUTH_ISSUER" >> ./env/cks.env +fi + # Print Summary printf "Summary:\n\n" printf "\tInstallation\n" @@ -225,6 +291,15 @@ printf "\tAuth\n" printf "\tJWT Enabled: %s\n" "$JWT_AUTH_ENABLED" printf "\tHMAC Enabled: %s\n" "$HMAC_AUTH_ENABLED" printf "\tVirtru Org ID: %s\n\n" "$JWT_AUTH_AUDIENCE" + +if [ "$KAS_ENABLED" = true ]; then + printf "\t${GREEN}KAS Configuration${RESET}\n" + printf "\tKAS Enabled: true\n" + printf "\tKAS Auth Issuer: %s\n" "$KAS_AUTH_ISSUER" + printf "\tKAS Auth Audience: %s\n" "$KAS_AUTH_AUDIENCE" + printf "\tKAS URI: %s\n\n" "$KAS_URI" +fi + printf "\tTroubleshooting\n" printf "\tSupport URL: %s\n" $SUPPORT_URL printf "\tSupport Email: %s\n" $SUPPORT_EMAIL @@ -251,5 +326,8 @@ rm -rf ./cks_info # Create the Run File touch ./run.sh +chmod +x ./run.sh + +# Generate run.sh (always uses port 9000 via Caddy, no "serve" arg - supervisord manages processes) +echo "docker run --name Virtru_CKS --interactive --tty --detach --restart unless-stopped --env-file "$WORKING_DIR"/env/cks.env -p 443:9000 --mount type=bind,source="$WORKING_DIR"/keys,target="$KEY_PROVIDER_PATH" --mount type=bind,source="$WORKING_DIR"/ssl,target=/app/ssl containers.virtru.com/cks:v"$CKS_VERSION"" > ./run.sh -echo "docker run 
--name Virtru_CKS --interactive --tty --detach --restart unless-stopped --env-file "$WORKING_DIR"/env/cks.env -p 443:$PORT --mount type=bind,source="$WORKING_DIR"/keys,target="$KEY_PROVIDER_PATH" --mount type=bind,source="$WORKING_DIR"/ssl,target=/app/ssl containers.virtru.com/cks:v"$CKS_VERSION" serve" > ./run.sh diff --git a/update.sh b/update.sh index b3a9e64..87994f5 100644 --- a/update.sh +++ b/update.sh @@ -72,6 +72,13 @@ if ! [ -d "$WORKING_DIR" ]; then exit fi +# Detect if KAS is already enabled +KAS_ENABLED=false +if grep -q "KAS_ROOT_KEY" "$WORKING_DIR"/env/cks.env 2>/dev/null; then + KAS_ENABLED=true + printf "Detected existing KAS configuration.\n\n" +fi + # Upgrades if envVariableNotSet "JWT_AUTH_ENABLED"; then printf "Virtru supports authentication to your CKS via JWTs.\n" @@ -85,12 +92,115 @@ if envVariableNotSet "JWT_AUTH_ENABLED"; then fi fi +# Offer to enable KAS for CKS-only deployments +if [ "$KAS_ENABLED" = false ]; then + printf "\n${GREEN}Key Access Service (KAS)${RESET}\n" + printf "KAS is available for this CKS deployment.\n" + printf "KAS enables integration with Virtru's Data Security Platform.\n\n" + + if prompt "Do you want to enable KAS [yes/no]?"; then + # Create backup before migration + printf "Creating backup of current configuration...\n" + cp "$WORKING_DIR"/env/cks.env "$WORKING_DIR"/env/cks.env.backup.$(date +%Y%m%d_%H%M%S) + printf "Backup created.\n\n" + + KAS_ENABLED=true + + # Set KAS configuration (no prompts needed - use standard values) + KAS_AUTH_ISSUER="https://login.virtru.com/oauth2/default" + KAS_AUTH_AUDIENCE="https://api.virtru.com" + + # Get CKS FQDN from existing SSL certificate for KAS_URI + CKS_FQDN=$(find "$WORKING_DIR"/ssl/ -name "*.crt" -not -name "ssl.pem" 2>/dev/null | head -1 | xargs basename -s .crt 2>/dev/null) + if [ -z "$CKS_FQDN" ]; then + CKS_FQDN="localhost" + fi + KAS_URI="https://${CKS_FQDN}" + + # Generate KAS_ROOT_KEY + KAS_ROOT_KEY=$(openssl rand -hex 32) + + # Determine key type from existing 
keys + if ls "$WORKING_DIR"/keys/ecc_*.pem 1>/dev/null 2>&1; then + KEY_TYPE="ECC" + KEY_ALGORITHM="ec:p256" + KEY_PUBLIC_FILE="/app/keys/ecc_p256_001.pub" + KEY_PRIVATE_FILE="/app/keys/ecc_p256_001.pem" + else + KEY_TYPE="RSA" + KEY_ALGORITHM="rsa:2048" + KEY_PUBLIC_FILE="/app/keys/rsa_001.pub" + KEY_PRIVATE_FILE="/app/keys/rsa_001.pem" + fi + + # Get existing Org ID from JWT_AUTH_AUDIENCE + EXISTING_ORG_ID=$(cat "$WORKING_DIR"/env/cks.env | grep JWT_AUTH_AUDIENCE | cut -d "=" -f2) + + # Add KAS environment variables + updateEnvVariable "KAS_ROOT_KEY" "$KAS_ROOT_KEY" + updateEnvVariable "ORG_ID" "$EXISTING_ORG_ID" + updateEnvVariable "KAS_AUTH_ISSUER" "$KAS_AUTH_ISSUER" + updateEnvVariable "KAS_AUTH_AUDIENCE" "$KAS_AUTH_AUDIENCE" + updateEnvVariable "KAS_URI" "$KAS_URI" + updateEnvVariable "ACM_ENDPOINT" "https://api.virtru.com/acm/api" + updateEnvVariable "SECURE_ENCLAVE_ENDPOINT" "https://api.virtru.com/secure-enclave/api" + updateEnvVariable "WRAPPING_KEY_ID" "kas-root-key" + + # KAS Logging + updateEnvVariable "KAS_LOG_LEVEL" "debug" + updateEnvVariable "KAS_LOG_TYPE" "text" + updateEnvVariable "KAS_LOG_OUTPUT" "stdout" + + # Database configuration + updateEnvVariable "DSP_DB_HOST" "localhost" + updateEnvVariable "DSP_DB_PORT" "5432" + updateEnvVariable "DSP_DB_DATABASE" "opentdf" + updateEnvVariable "DSP_DB_USER" "postgres" + updateEnvVariable "DSP_DB_PASSWORD" "$(openssl rand -hex 16)" + updateEnvVariable "DSP_DB_SSLMODE" "prefer" + updateEnvVariable "DSP_DB_SCHEMA" "dsp" + + updateEnvVariable "KEY_ALGORITHM" "$KEY_ALGORITHM" + updateEnvVariable "KAS_PUBLIC_KEY_FILE" "$KEY_PUBLIC_FILE" + updateEnvVariable "KAS_PRIVATE_KEY_FILE" "$KEY_PRIVATE_FILE" + + # CKS always runs on internal port 3000 (Caddy exposes 9000) + updateEnvVariable "PORT" "3000" + updateEnvVariable "JWT_AUTH_ISSUER" "$KAS_AUTH_ISSUER" + + printf "\n${GREEN}KAS configuration added successfully.${RESET}\n\n" + fi +fi + KEY_PROVIDER_TYPE=$(cat "$WORKING_DIR"/env/cks.env | grep KEY_PROVIDER_TYPE 
| cut -d "=" -f2)
 
+# Generate the Docker run command. Caddy listens on container port 9000 and
+# supervisord manages the processes, so no "serve" argument is needed.
+DOCKER_IMAGE="containers.virtru.com/cks:v$CKS_VERSION"
+CONTAINER_NAME="Virtru_CKS"
+CADDY_PORT=9000 # container port Caddy listens on; host port 443 is published to it
+
 if [ "$KEY_PROVIDER_TYPE" = "hsm" ]; then
-  echo "docker run --name Virtru_CKS --interactive --tty --detach --env-file "$WORKING_DIR"/env/cks.env -p 443:$PORT --mount type=bind,source="$WORKING_DIR"/keys,target="$KEY_PROVIDER_PATH" --mount type=bind,source="$WORKING_DIR"/ssl,target=/app/ssl --mount type=bind,source="$WORKING_DIR"/hsm-config/customerCA.crt,target=/opt/cloudhsm/etc/customerCA.crt virtru/cks:v"$CKS_VERSION" serve" > "$WORKING_DIR/run.sh"
+  echo "docker run --name $CONTAINER_NAME --interactive --tty --detach --restart unless-stopped --env-file "$WORKING_DIR"/env/cks.env -p 443:$CADDY_PORT --mount type=bind,source="$WORKING_DIR"/keys,target="$KEY_PROVIDER_PATH" --mount type=bind,source="$WORKING_DIR"/ssl,target=/app/ssl --mount type=bind,source="$WORKING_DIR"/hsm-config/customerCA.crt,target=/opt/cloudhsm/etc/customerCA.crt $DOCKER_IMAGE" > "$WORKING_DIR/run.sh"
 else
-  echo "docker run --name Virtru_CKS --interactive --tty --detach --env-file "$WORKING_DIR"/env/cks.env -p 443:$PORT --mount type=bind,source="$WORKING_DIR"/keys,target="$KEY_PROVIDER_PATH" --mount type=bind,source="$WORKING_DIR"/ssl,target=/app/ssl virtru/cks:v"$CKS_VERSION" serve" > ./run.sh
+  echo "docker run --name $CONTAINER_NAME --interactive --tty --detach --restart unless-stopped --env-file "$WORKING_DIR"/env/cks.env -p 443:$CADDY_PORT --mount type=bind,source="$WORKING_DIR"/keys,target="$KEY_PROVIDER_PATH" --mount type=bind,source="$WORKING_DIR"/ssl,target=/app/ssl $DOCKER_IMAGE" > "$WORKING_DIR/run.sh"
 fi
-printf "\nUpdated! Run the CKS with bash $WORKING_DIR/run.sh\n"
+chmod +x "$WORKING_DIR/run.sh"
+
+# Provide clear instructions based on deployment type
+printf "\n${GREEN}Configuration updated!${RESET}\n\n"
+
+if [ "$KAS_ENABLED" = true ]; then
+  printf "Deployment type: ${BOLD}CKS with KAS${RESET}\n"
+else
+  printf "Deployment type: ${BOLD}CKS Only${RESET}\n"
+fi
+printf "Docker image: $DOCKER_IMAGE\n\n"
+
+printf "To apply the changes:\n"
+printf "  1. Stop the current container: ${BOLD}docker stop $CONTAINER_NAME${RESET}\n"
+printf "  2. Remove the old container: ${BOLD}docker rm $CONTAINER_NAME${RESET}\n"
+printf "  3. Start the new container: ${BOLD}bash $WORKING_DIR/run.sh${RESET}\n"
+printf "  4. Monitor logs: ${BOLD}docker logs -f $CONTAINER_NAME${RESET}\n\n"
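
The migration path in `update.sh` leans on an `updateEnvVariable` helper whose definition is outside this diff. A rough sketch of what such a helper typically does, replacing a `KEY=VALUE` line when the key already exists and appending it otherwise (the function body, the `ENV_FILE` temp file, and the sample values here are illustrative, not the script's actual implementation):

```shell
# Illustrative stand-in for update.sh's updateEnvVariable helper.
# ENV_FILE is a temp file seeded with sample values for the demo.
ENV_FILE="$(mktemp)"
printf 'PORT=443\nLOG_CONSOLE_ENABLED=true\n' > "$ENV_FILE"

update_env_variable() {
  key="$1"; value="$2"
  if grep -q "^${key}=" "$ENV_FILE"; then
    # Key present: rewrite the line (avoids non-portable `sed -i`).
    tmp="$(mktemp)"
    sed "s|^${key}=.*|${key}=${value}|" "$ENV_FILE" > "$tmp" && mv "$tmp" "$ENV_FILE"
  else
    # Key absent: append it.
    printf '%s=%s\n' "$key" "$value" >> "$ENV_FILE"
  fi
}

update_env_variable "PORT" "3000"        # existing key: replaced in place
update_env_variable "DSP_DB_PORT" "5432" # new key: appended

cat "$ENV_FILE"
```

Because the replace branch is idempotent, re-running the migration does not duplicate entries in `env/cks.env`.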