A NetIQ Identity Manager driver that captures events from the Subscriber channel and logs them to a PostgreSQL database, along with a Flask web UI for browsing and forensic analysis. This is not a comprehensive auditing solution, but it is very useful during driver development, where it can capture sample events for testing.
The Event Logger driver sits on the subscriber channel of an Identity Manager driver set. Every event that passes through (add, modify, delete, sync, rename, move) is converted to JSON and written to a PostgreSQL table alongside the original XML. This gives you a searchable, queryable audit trail of all identity events.
The companion web UI provides a forensic investigation interface: search for any object by DN and see a complete timeline of every event that affected it, with diff views for modify events showing exactly what changed.
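The conversion idea can be sketched in a few lines. This is a simplified illustration in Python (the actual converters are the Java classes under xds2json/), and the JSON field names used here are assumptions, not the driver's exact output schema:

```python
import json
import xml.etree.ElementTree as ET

def xds_modify_to_json(xml_string):
    """Convert a minimal XDS <modify> event into a JSON-friendly dict.

    Illustrative sketch only -- the real converters in xds2json/
    handle far more structure (remove-value, value types,
    associations, and so on).
    """
    modify = ET.fromstring(xml_string).find(".//modify")
    event = {
        "event-type": "modify",
        "event-id": modify.get("event-id"),
        "class-name": modify.get("class-name"),
        "src-dn": modify.get("src-dn"),
        "attributes": {},
    }
    for mod_attr in modify.findall("modify-attr"):
        values = [v.text for v in mod_attr.findall(".//add-value/value")]
        event["attributes"][mod_attr.get("attr-name")] = values
    return event

sample = """<nds><input>
  <modify class-name="User" event-id="1714143050#2"
          src-dn="\\\\TREE\\\\novell\\\\Users\\\\jdoe">
    <modify-attr attr-name="mail">
      <add-value><value>jdoe@example.com</value></add-value>
    </modify-attr>
  </modify>
</input></nds>"""

print(json.dumps(xds_modify_to_json(sample), indent=2))
```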
```
src/com/pointblue/idm/eventlogger/
    EventLoggerDriver.java        Main driver class (DriverShim, SubscriptionShim, PublicationShim)
    CommonImpl.java               Base class with XDS document utilities
    PolicyLogger.java             Standalone logger for policy input/output documents
    xds2json/
        BaseEventConverter.java       Abstract base for all XML-to-JSON converters
        AddEventConverter.java        Handles <add> events
        ModifyEventConverter.java     Handles <modify> events
        DeleteEventConverter.java     Handles <delete> events
        SyncEventConverter.java       Handles <sync> events
        RenameEventConverter.java     Handles <rename> events
        MoveEventConverter.java       Handles <move> events
        JsonToXmlConverter.java       Reverse converter (JSON back to XML)
    offline/                      Test harnesses for offline development
web/
    app.py                        Flask web application
    requirements.txt              Python dependencies
    templates/                    Jinja2 templates
sql/
    CREATE jsonEvent.sql          Table and index DDL
    *.sql                         Example queries
EventLogger.xml                   Designer driver export for import
```
Create the database:

```sql
CREATE DATABASE "idmEvent";
```

Connect to the idmEvent database and run the DDL script:

```shell
psql -h localhost -U postgres -d idmEvent -f sql/CREATE\ jsonEvent.sql
```

This creates:
| Column | Type | Description |
|---|---|---|
| `eventid` | varchar PK | DirXML event ID (e.g. `1714143050#2`) |
| `classname` | varchar | Object class (e.g. User, Group) |
| `srcdn` | varchar | Source DN of the affected object |
| `srcentryid` | varchar | Source entry GUID |
| `eventtype` | varchar | Event type: add, modify, delete, sync, rename, move |
| `eventjson` | jsonb | Full event converted to JSON |
| `xmlevent` | text | Original XDS XML document (optional, controlled by `storeXML`) |
| `cachedtime` | timestamptz | Event timestamp |
| `srcdriver` | varchar | DN of the source driver that logged the event |
An index on REVERSE(srcdn) is created to support efficient subtree queries using reverse pattern matching. An index on srcdriver supports filtering events by source driver.
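The reasoning behind the reverse index: a B-tree index can only serve a LIKE whose pattern has a fixed prefix, and a subtree test on a DN matches one end of the string. Reversing both the column and the pattern turns that into a prefix test the index can use. A small Python sketch of the equivalence (the DN values below are made up, and whether the container portion sits at the start or end of srcdn depends on how your DNs are stored):

```python
def reverse_like_prefix(suffix):
    """Pattern for: reverse(srcdn) LIKE reverse(suffix) || '%'.

    A LIKE with a fixed prefix can be served by the B-tree index
    on REVERSE(srcdn); the original '%suffix' form cannot.
    """
    return suffix[::-1] + "%"

def in_subtree(srcdn, container_suffix):
    # Equivalent of: reverse(srcdn) LIKE reverse(container_suffix) || '%'
    return srcdn[::-1].startswith(container_suffix[::-1])

# Hypothetical DNs with the container at the end of the string
dns = [
    "cn=jdoe,ou=Users,o=novell",
    "cn=backup,ou=Admins,o=novell",
]
hits = [d for d in dns if in_subtree(d, "ou=Users,o=novell")]
print(hits)
```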
The web UI should connect with a read-only database account to prevent accidental data modification:
```sql
CREATE USER eventlogger_reader WITH PASSWORD 'your_reader_password';
GRANT CONNECT ON DATABASE "idmEvent" TO eventlogger_reader;
GRANT USAGE ON SCHEMA public TO eventlogger_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO eventlogger_reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO eventlogger_reader;
```

- Compile the Java source files against the Identity Manager driver SDK JARs (`dirxml_misc.jar`, `nxsl.jar`, etc.) and the PostgreSQL JDBC driver (`lib/postgresql-42.7.7.jar`).
- Package the compiled classes into `DIrXMLEventLogger.jar`.
- Deploy the JAR to the Identity Manager server's driver classpath (typically `/opt/novell/eDirectory/lib/dirxml/classes/`) and restart eDirectory.
- Deploy `lib/postgresql-42.7.7.jar` to the same classpath location if not already present.
- Open NetIQ Identity Manager Designer.
- Right-click on the driver set where you want to add the Event Logger.
- Select Import and choose `EventLogger.xml` from the project root.
- This creates a pre-configured driver object with the correct Java class name and default settings.
- Modify the driver filter to include the attributes (or classes) you need.
The driver uses the standard Identity Manager authentication fields:
| Field | Purpose | Example |
|---|---|---|
| Authentication ID | PostgreSQL username | postgres |
| Authentication Context | PostgreSQL host:port/database | localhost:5432/idmEvent |
| Application Password | PostgreSQL password | (your password) |
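The Authentication Context packs host, port, and database into a single `host:port/database` string. A hedged sketch of how such a value splits apart; this mirrors the format in the table, not the driver's actual parsing code, and `parse_auth_context` is a hypothetical helper:

```python
def parse_auth_context(context, default_port=5432):
    """Split an Authentication Context like 'localhost:5432/idmEvent'
    into host, port, and database. Illustrative only."""
    host_port, _, database = context.partition("/")
    host, _, port = host_port.partition(":")
    return {
        "host": host,
        "port": int(port) if port else default_port,
        "database": database,
    }

print(parse_auth_context("localhost:5432/idmEvent"))
```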
| Option | Type | Default | Description |
|---|---|---|---|
| `storeXML` | string | `true` | Set to `false` to skip storing the raw XML document (saves disk space) |
| `tableName` | string | `public.dxmlevent` | Override the target table name |
- The driver's `init()` method reads the authentication and option parameters, then validates the database connection. If the database is unreachable, the driver returns a fatal status and will not start.
- On each subscriber channel event, `execute()` is called with the XDS document.
- The XML is parsed to determine the event type, then converted to JSON by the appropriate converter.
- The JSON and (optionally) raw XML are inserted into PostgreSQL via a reusable JDBC connection.
- If the database becomes unavailable, the driver applies exponential backoff (1s, 2s, 4s, ... up to 5 minutes) before retrying, and returns `STATUS_RETRY` so the engine queues the event for redelivery.
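The backoff schedule in the last step is a doubling sequence with a cap; a simplified model in Python (the driver's actual retry code may add details, such as resetting the delay after a successful insert):

```python
import itertools

def backoff_delays(initial=1.0, cap=300.0):
    """Yield retry delays: 1s, 2s, 4s, ... capped at 5 minutes (300s)."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

# First ten delays of the schedule
print(list(itertools.islice(backoff_delays(), 10)))
```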
| SQL State | Behavior |
|---|---|
| `23505` (duplicate key) | Returns error, event is skipped (already logged) |
| `42P01`, `42703` (undefined table/column) | Returns fatal, requires admin fix |
| `28000` (invalid credentials) | Returns fatal |
| `08xxx` (connection errors) | Resets connection, applies backoff, retries |
| Other | Retries |
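The table above reads as a small classification function; a Python sketch (the behavior labels are mine, not identifiers from the driver source):

```python
def classify_sqlstate(sqlstate):
    """Map a PostgreSQL SQLSTATE to the driver behavior in the table above."""
    if sqlstate == "23505":                      # duplicate key: event already logged
        return "skip"
    if sqlstate in ("42P01", "42703", "28000"):  # missing table/column, bad credentials
        return "fatal"
    if sqlstate.startswith("08"):                # connection errors
        return "reset-and-retry-with-backoff"
    return "retry"                               # everything else

print(classify_sqlstate("08006"))
```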
The web UI is a Flask application for browsing and searching the event database.
The easiest way to run the web UI, especially for non-developers, is with Docker.
- Install Docker Desktop.
- Copy the example environment file and fill in your database credentials:
```shell
cd web
cp .env.example .env
```

- Edit `.env` with your database connection details:

```
DB_HOST=10.0.0.5
DB_PORT=5432
DB_NAME=idmEvent
DB_USER=eventlogger_reader
DB_PASSWORD=your_reader_password
TABLE_NAME=public.dxmlevent
```

- Start the application:

```shell
docker compose up
```

- Open http://localhost:5000.

To stop: `docker compose down`. To rebuild after updates: `docker compose build && docker compose up`.
If you prefer to run without Docker:
```shell
cd web
python3 -m pip install -r requirements.txt
```

Start the application with your database credentials:

```shell
DB_HOST=localhost \
DB_PORT=5432 \
DB_NAME=idmEvent \
DB_USER=eventlogger_reader \
DB_PASSWORD=your_reader_password \
python3 app.py
```

Then open http://localhost:5000.
Important: Use the read-only database account (eventlogger_reader) for the web UI, not the driver's write account. The UI only needs SELECT access and should not have the ability to modify event data.
| Variable | Default | Description |
|---|---|---|
| `DB_HOST` | `localhost` | PostgreSQL host |
| `DB_PORT` | `5432` | PostgreSQL port |
| `DB_NAME` | `idmEvent` | Database name |
| `DB_USER` | `postgres` | Database user |
| `DB_PASSWORD` | (empty) | Database password |
| `TABLE_NAME` | `public.dxmlevent` | Table name (must match driver config) |
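These defaults resolve in the usual environment-with-fallback pattern; a sketch (app.py's real loading code is not shown in this document, so treat this as illustrative):

```python
import os

def load_config(env=None):
    """Resolve web UI settings from environment variables, falling back
    to the defaults listed in the table above."""
    env = os.environ if env is None else env
    return {
        "DB_HOST": env.get("DB_HOST", "localhost"),
        "DB_PORT": int(env.get("DB_PORT", "5432")),
        "DB_NAME": env.get("DB_NAME", "idmEvent"),
        "DB_USER": env.get("DB_USER", "postgres"),
        "DB_PASSWORD": env.get("DB_PASSWORD", ""),
        "TABLE_NAME": env.get("TABLE_NAME", "public.dxmlevent"),
    }

print(load_config({"DB_HOST": "10.0.0.5", "DB_USER": "eventlogger_reader"}))
```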
| Page | URL | Description |
|---|---|---|
| Home | `/` | DN autocomplete search to find objects |
| Timeline | `/timeline?srcdn=...` | Chronological event history for an object, filterable by event type, class name, and date range |
| Event Detail | `/event?id=...` | Full JSON and XML view for a single event, with a modify diff table showing old/new values and prev/next navigation |
| Recent | `/recent` | Most recent events across all objects (default 100), filterable by type and driver |
| Search | `/search` | Full-text search across all event JSON payloads with filters |
| Dashboard | `/stats` | Event counts by type and class, most active objects, 30-day activity chart |
| CSV Export | `/export/timeline?srcdn=...` | Download an object's complete event history as CSV |
"What happened to this user?" — Go to Home, type part of the DN, select it, view the full timeline. Filter by date range to narrow down an incident window.
"What changed on this date?" — Use Search with a date range filter. Click any result to see full detail, or click the DN to see that object's complete history.
"What attributes were modified?" — Click any modify event in a timeline. The Event Detail page shows a diff table with the attribute name, old value, and new value.
"Find all events touching a specific value" — Use Search to query across all JSON payloads. For example, search for an email address to find every event that set or removed it.
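The diff view in the "What attributes were modified?" scenario boils down to pairing removed and added values per attribute. A Python sketch under an assumed JSON shape (the real eventjson layout produced by ModifyEventConverter may differ):

```python
def diff_modify_event(attributes):
    """Build diff rows (attribute, old values, new values) from a
    modify event's attribute changes.

    Assumes each attribute maps to {"remove": [...], "add": [...]} --
    a simplification of whatever shape the converter actually emits.
    """
    return [
        {
            "attribute": attr,
            "old": change.get("remove", []),
            "new": change.get("add", []),
        }
        for attr, change in sorted(attributes.items())
    ]

changes = {
    "mail": {"remove": ["jdoe@old.example.com"], "add": ["jdoe@example.com"]},
    "Title": {"add": ["Engineer"]},
}
for row in diff_modify_event(changes):
    print(row)
```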
The Event Logger driver only captures events on its own subscriber channel. If you want to log events from other drivers — for example, to capture what an AD driver or SAP driver is processing — you can use PolicyLogger from an ECMAScript policy on those drivers.
When the EventLoggerDriver starts, it automatically registers itself with the PolicyLogger static registry. Policies on any other driver in the same JVM can then call PolicyLogger.logEvent() to send their current XDS document to the Event Logger's database — no database credentials needed in the policy code.
- Deploy `DIrXMLEventLogger.jar` and the PostgreSQL JDBC driver to the Identity Manager classpath (see Building the JAR).
- Start the EventLoggerDriver. It registers itself automatically.
- Add an ECMAScript policy action to the driver whose events you want to capture.
Add this as an ECMAScript action in a policy on the driver you want to log events from (e.g., your AD driver, LDAP driver, etc.):
```javascript
var PolicyLogger = Packages.com.pointblue.idm.eventlogger.PolicyLogger;

// DN of the EventLoggerDriver to log through
var eventLoggerDN = "\\TREENAME\\system\\driverset\\EventLogger";

// DN of THIS driver (the one whose policy is running)
var thisDriverDN = "\\TREENAME\\system\\driverset\\ActiveDirectory";

// Get the current operation document via XPath
var xmlString = XPATH.get("/");

// Log the event — returns true on success, false on error
PolicyLogger.logEvent(eventLoggerDN, thisDriverDN, "sub", "AD-Sub-ETP", xmlString);
```

Place the policy on whichever channel and at whichever policy point you want to capture. For example, placing it on the Subscriber Event Transformation Policy of your AD driver would log every event the AD driver processes on its subscriber channel.
Parameters:
| Parameter | Description |
|---|---|
| `eventLoggerDN` | Full DN of the EventLoggerDriver instance to log through |
| `thisDriverDN` | Full DN of the driver whose policy is calling this method (stored in the `srcdriver` column) |
| `channel` | Channel context, e.g. `"sub"` or `"pub"` |
| `policyDN` | Name of the calling policy (for traceability in the logged JSON) |
| `xmlString` | The current XDS document as a string (use `XPATH.get("/")`) |
The method returns boolean — true if the event was logged, false if the EventLoggerDriver is not running or an error occurred. Errors are traced but never thrown, so the calling driver's policy execution is not interrupted.
Events logged through PolicyLogger are written to the same table as the Event Logger driver's own events. Two additional fields are added to the JSON for traceability:
- `logged-by-policy` — the policy name passed to `logEvent()`
- `logged-channel` — the channel (`"sub"` or `"pub"`)
The srcdriver column is set to the calling driver's DN (the thisDriverDN parameter). This lets you distinguish which driver an event came from and filter by source driver in the web UI. Events captured directly by the EventLoggerDriver on its own subscriber channel will have srcdriver set to the EventLoggerDriver's own DN.
logEvent() returns false if the event could not be logged. This happens when:
- The EventLoggerDriver is not running (not registered in the PolicyLogger registry)
- The database connection is down
- The XML could not be parsed or converted
Errors are traced but never thrown, so the calling driver's policy execution continues normally. By default, a failed log is silently dropped.
If you want to handle failures, check the return value. The simplest approach is fire-and-forget:
```javascript
// Fire and forget — event is silently dropped on failure
PolicyLogger.logEvent(eventLoggerDN, thisDriverDN, "sub", "AD-Sub-ETP", xmlString);
```

If event logging is important but not critical, you can log a warning:
```javascript
var success = PolicyLogger.logEvent(eventLoggerDN, thisDriverDN, "sub", "AD-Sub-ETP", xmlString);
if (!success) {
    java.lang.System.out.println("WARNING: Failed to log event to Event Logger");
}
```

If event logging is critical and you want the engine to retry the event, return a retry status. This causes the IDM engine to requeue the event on the calling driver's subscriber channel, and the policy fires again on the next attempt:
```javascript
var success = PolicyLogger.logEvent(eventLoggerDN, thisDriverDN, "sub", "AD-Sub-ETP", xmlString);
if (!success) {
    status.setLevel(StatusLevel.RETRY);
    status.setMessage("Event Logger unavailable, retrying");
}
```

Caution: Using retry will stall the calling driver's event processing until the Event Logger becomes available. All queued events on that driver will back up until the retry succeeds. Only use this if logging is critical enough to justify blocking the driver.
If you run more than one EventLoggerDriver (e.g., logging to different databases), each registers separately. Policy code references the DN of whichever Event Logger instance it wants to log through.
Find events for a DN subtree (uses the reverse index):
```sql
SELECT * FROM dxmlevent
WHERE reverse(srcdn) LIKE reverse('%\novell\Users%')
ORDER BY cachedtime;
```

Find all events where a specific attribute was modified:

```sql
SELECT * FROM dxmlevent
WHERE eventtype = 'modify'
  AND eventjson -> 'attributes' ? 'mail'
ORDER BY cachedtime;
```

Search for a specific value in event payloads:

```sql
SELECT * FROM dxmlevent
WHERE eventjson::text ILIKE '%jdoe@example.com%';
```

Get database size:

```sql
SELECT pg_size_pretty(pg_database_size('idmEvent'));
```

The event table will grow indefinitely. To automatically purge old events, use pg_cron to schedule a nightly cleanup job.
pg_cron is available as a package on most PostgreSQL distributions. On Debian/Ubuntu:
```shell
sudo apt install postgresql-16-cron
```

Add it to postgresql.conf:

```
shared_preload_libraries = 'pg_cron'
cron.database_name = 'idmEvent'
```

Restart PostgreSQL, then enable the extension:

```sql
CREATE EXTENSION pg_cron;
```

A simple DELETE that removes all events older than 30 days:
```sql
SELECT cron.schedule(
    'purge-old-events',
    '0 3 * * *',  -- every day at 3:00 AM
    $$DELETE FROM dxmlevent WHERE cachedtime < now() - interval '30 days'$$
);
```

If the table is large, a single DELETE can hold locks and generate WAL traffic for an extended period. Use a batched approach that deletes in chunks of 10,000 rows with a short pause between batches. Create this function in the idmEvent database:
```sql
CREATE OR REPLACE FUNCTION purge_old_events(
    retention_interval interval DEFAULT interval '30 days',
    batch_size int DEFAULT 10000,
    pause_ms int DEFAULT 100
)
RETURNS bigint LANGUAGE plpgsql AS $$
DECLARE
    total_deleted bigint := 0;
    batch_deleted bigint;
BEGIN
    LOOP
        DELETE FROM dxmlevent
        WHERE eventid IN (
            SELECT eventid FROM dxmlevent
            WHERE cachedtime < now() - retention_interval
            LIMIT batch_size
        );
        GET DIAGNOSTICS batch_deleted = ROW_COUNT;
        total_deleted := total_deleted + batch_deleted;
        EXIT WHEN batch_deleted = 0;
        PERFORM pg_sleep(pause_ms / 1000.0);
    END LOOP;
    RETURN total_deleted;
END;
$$;
```

Then schedule it with pg_cron:

```sql
SELECT cron.schedule(
    'purge-old-events-batched',
    '0 3 * * *',
    $$SELECT purge_old_events(interval '30 days', 10000, 100)$$
);
```

Instead of (or in addition to) age-based purging, you can trigger cleanup only when the table exceeds a size threshold. This function checks the table size first, then deletes the oldest events in batches until the table is under the target size:
```sql
CREATE OR REPLACE FUNCTION purge_events_by_size(
    max_size_mb int DEFAULT 1000,
    batch_size int DEFAULT 10000,
    pause_ms int DEFAULT 100
)
RETURNS bigint LANGUAGE plpgsql AS $$
DECLARE
    total_deleted bigint := 0;
    batch_deleted bigint;
    current_size bigint;
BEGIN
    SELECT pg_total_relation_size('dxmlevent') INTO current_size;
    -- cast to bigint so thresholds over ~2 GB don't overflow int arithmetic
    IF current_size <= max_size_mb::bigint * 1024 * 1024 THEN
        RETURN 0;
    END IF;
    LOOP
        DELETE FROM dxmlevent
        WHERE eventid IN (
            SELECT eventid FROM dxmlevent
            ORDER BY cachedtime ASC
            LIMIT batch_size
        );
        GET DIAGNOSTICS batch_deleted = ROW_COUNT;
        total_deleted := total_deleted + batch_deleted;
        EXIT WHEN batch_deleted = 0;
        PERFORM pg_sleep(pause_ms / 1000.0);
        SELECT pg_total_relation_size('dxmlevent') INTO current_size;
        EXIT WHEN current_size <= max_size_mb::bigint * 1024 * 1024;
    END LOOP;
    RETURN total_deleted;
END;
$$;
```

Schedule it to check daily — it will only delete if the table exceeds the threshold (1 GB in this example):

```sql
SELECT cron.schedule(
    'purge-events-by-size',
    '0 4 * * *',  -- every day at 4:00 AM
    $$SELECT purge_events_by_size(1000, 10000, 100)$$
);
```

Note: `pg_total_relation_size` includes indexes and TOAST data. After large deletes, disk space is not returned to the OS until you run `VACUUM FULL` or let autovacuum reclaim it. The table will appear smaller to new queries immediately, but the on-disk file size may lag behind.
```sql
-- List scheduled jobs
SELECT * FROM cron.job;

-- View recent job run history
SELECT * FROM cron.job_run_details ORDER BY start_time DESC LIMIT 10;

-- Remove a job
SELECT cron.unschedule('purge-old-events-batched');
```

Copyright Point Blue Technology. This code is public domain and may be used in any way you like.

