feat(xpc): implement XPC service host and status management #57
romanr wants to merge 1 commit into lynnswap:main
Conversation
Pull request overview
This PR adds an XPC-based status/control surface to the proxy so external clients can receive real-time status updates (including per-session correlated request counts) and request a shutdown, plus a helper-server parent PID watcher.
Changes:
- Introduces XPC contracts and a mach-service host that supports status fetch, client registration, and status push notifications.
- Integrates XPC status generation and periodic “push-if-changed” broadcasting into `ProxyServer`.
- Adds integration tests for XPC status aggregation/sorting and implements parent-process liveness termination in `xcode-mcp-proxy-server`.
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| Tests/ProxyIntegrationTests/ProxyServerTests.swift | Adds test coverage for XPC status aggregation/sorting and unreachable payload behavior. |
| Sources/XcodeMCPProxyServer/XcodeMCPProxyServer.swift | Adds detached parent PID watcher and OSLog logging for helper termination behavior. |
| Sources/XcodeMCPProxy/ProxyXPCServiceHost.swift | Implements mach-service listener, client registry, status encoding, and broadcast-on-change. |
| Sources/XcodeMCPProxy/ProxyXPCContracts.swift | Defines XPC mach service name, status payload models, and XPC protocols. |
| Sources/XcodeMCPProxy/ProxyServer.swift | Wires XPC host into server lifecycle and exposes makeXPCStatus + endpoint display logic. |
Comments suppressed due to low confidence (1)
Sources/XcodeMCPProxy/ProxyServer.swift:175
`shutdownGracefully()` can now be triggered via XPC (`requestShutdown`) and may be called more than once (or concurrently) by multiple clients. The method currently always proceeds to `group.shutdownGracefully` and closes channels without an idempotence guard, which can lead to double-shutdown of the `EventLoopGroup` / repeated cancellation work. Consider protecting shutdown with `runtimeLock` (or a separate lock) and returning the same in-flight future when shutdown has already started.
```swift
public func shutdownGracefully() -> EventLoopFuture<Void> {
    xpcBroadcastTask?.cancel()
    xpcBroadcastTask = nil
    xpcServiceHost.stop()
    let promise = group.next().makePromise(of: Void.self)
    let shutdownContext = beginShutdown()
    shutdownContext.autoApprover?.stop()
    shutdownContext.sessionManager?.shutdown()
    for channel in shutdownContext.channels {
```
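One way to address the idempotence concern is a sketch along these lines, assuming `runtimeLock` is an existing `NIOLock` and introducing a hypothetical `shutdownFuture` stored property to cache the in-flight result:

```swift
// Sketch only: guard shutdown with a lock and reuse the in-flight future.
// `runtimeLock` is assumed to be an NIOLock already owned by ProxyServer;
// `shutdownFuture: EventLoopFuture<Void>?` is a hypothetical new property.
public func shutdownGracefully() -> EventLoopFuture<Void> {
    runtimeLock.withLock { () -> EventLoopFuture<Void> in
        if let inFlight = shutdownFuture {
            return inFlight  // shutdown already started; hand back the same future
        }
        xpcBroadcastTask?.cancel()
        xpcBroadcastTask = nil
        xpcServiceHost.stop()
        let promise = group.next().makePromise(of: Void.self)
        shutdownFuture = promise.futureResult
        // ... existing channel/session teardown, then fulfil `promise` ...
        return promise.futureResult
    }
}
```

Repeated or concurrent `requestShutdown` calls then all observe the same future instead of re-running teardown.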
```swift
private let registry = ProxyXPCClientRegistry()
private let statusProvider: StatusProvider
private let shutdownHandler: ShutdownHandler
private let encoder = JSONEncoder()
```
`ProxyXPCServiceHost` shares a single `JSONEncoder` instance across XPC callbacks and the periodic broadcast. `JSONEncoder` is not thread-safe, so concurrent `fetchStatus`/`registerClient`/`pushStatusIfChanged` calls can race and corrupt encoding or crash. Create a new encoder per call, or protect encoding behind a lock/serial executor.
```diff
- private let encoder = JSONEncoder()
+ private var encoder: JSONEncoder { JSONEncoder() }
```
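If allocating an encoder per call is undesirable, an alternative sketch keeps one encoder but serializes access with `NIOLockedValueBox` (from NIOConcurrencyHelpers, which SwiftNIO already brings in); `ProxyXPCStatus` here stands in for whatever the actual status payload type is:

```swift
// Alternative sketch: one shared encoder, all access funnelled through a lock.
// `ProxyXPCStatus` is a placeholder for the real status payload model.
private let encoder = NIOLockedValueBox(JSONEncoder())

private func encodeStatus(_ status: ProxyXPCStatus) throws -> Data {
    try encoder.withLockedValue { try $0.encode(status) }
}
```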
```swift
private var isRunning = false

public init(statusProvider: @escaping StatusProvider, shutdownHandler: @escaping ShutdownHandler) {
    self.statusProvider = statusProvider
    self.shutdownHandler = shutdownHandler
    super.init()
    listener.delegate = self
}

public func start() {
    guard !isRunning else { return }
    isRunning = true
    listener.resume()
```
`isRunning` is read/written from different execution contexts (XPC listener threads and the NIO event loop via `pushStatusIfChanged`) without synchronization. This is a data race under Swift concurrency rules. Guard it with a lock/`NIOLockedValueBox`, or make `ProxyXPCServiceHost` an actor / otherwise serialize access.
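The `NIOLockedValueBox` variant suggested above might look like this sketch, which also makes the start check-and-set atomic:

```swift
// Sketch: replace the plain Bool with a NIOLockedValueBox so start(), stop(),
// and pushStatusIfChanged() can read/write it from any thread safely.
private let isRunning = NIOLockedValueBox(false)

public func start() {
    // Atomically flip false -> true; bail out if we were already running.
    let shouldStart = isRunning.withLockedValue { running -> Bool in
        guard !running else { return false }
        running = true
        return true
    }
    guard shouldStart else { return }
    listener.resume()
}
```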
```swift
public func listener(_ listener: NSXPCListener, shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
    newConnection.exportedInterface = NSXPCInterface(with: ProxyXPCControlProtocol.self)
    newConnection.exportedObject = self
    newConnection.remoteObjectInterface = NSXPCInterface(with: ProxyXPCClientProtocol.self)

    newConnection.invalidationHandler = { [weak self, weak newConnection] in
        guard let self, let newConnection else { return }
        self.registry.remove(newConnection)
    }

    registry.add(newConnection)
    newConnection.resume()
    return true
}
```
The listener accepts every incoming mach service connection without validating the connecting process. As written, any local process that can connect to the service can call `requestShutdown` (and also receive status updates), which is a straightforward DoS / information-disclosure vector. Consider verifying the client via audit token / code-signing identity, restricting to the same user/session, or removing/guarding `requestShutdown` behind an authorization check.
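A minimal same-user check is one possible mitigation; this sketch uses `NSXPCConnection.effectiveUserIdentifier`, which is weaker than an audit-token/code-signing check (those need the Security framework's `SecCode` APIs, not shown):

```swift
// Sketch: refuse connections from other users before exporting any interface.
public func listener(_ listener: NSXPCListener, shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
    guard newConnection.effectiveUserIdentifier == getuid() else {
        return false  // different user: no status updates, no requestShutdown
    }
    // ... existing interface setup, invalidationHandler, registry.add, resume ...
    return true
}
```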
```swift
public func stop() {
    guard isRunning else { return }
    isRunning = false
    listener.invalidate()
    logger.info("XPC listener stopped", metadata: ["service": "\(ProxyXPCServiceConfiguration.machServiceName)"])
}
```
`stop()` calls `listener.invalidate()`, which permanently invalidates an `NSXPCListener` (it cannot be resumed again). Since `start()` can be called again after `stop()` (it only checks `isRunning`), this can lead to a silent failure to restart the listener. Either document that the host is single-use, or recreate the listener on `start()` / avoid invalidation if restart is intended.
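If restart is intended, one sketch is to recreate the listener each time `start()` runs (which requires changing the `listener` property from `let` to `var`):

```swift
// Sketch: make the host restartable by building a fresh NSXPCListener in
// start() instead of resuming a possibly-invalidated one.
public func start() {
    guard !isRunning else { return }
    isRunning = true
    let newListener = NSXPCListener(machServiceName: ProxyXPCServiceConfiguration.machServiceName)
    newListener.delegate = self
    newListener.resume()
    self.listener = newListener  // `listener` must become a var for this
}
```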
```swift
public func start() throws -> Channel {
    xpcServiceHost.start()
    startXPCBroadcasting()
    let logger = self.logger
```
`start()` begins the XPC listener and schedules periodic broadcasting before the HTTP server bind succeeds. If binding throws (e.g., port in use), the method exits with an error but the XPC service remains active and the repeated task continues running. Start XPC/broadcasting only after a successful bind/install, or add a `defer` cleanup that stops/cancels on failure.
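The rollback-on-failure option could be sketched as below; the `bootstrap.bind(host:port:)` call and `host`/`port` names are placeholders for whatever the actual bind path in `ProxyServer.start()` is:

```swift
// Sketch: roll back XPC/broadcast state if the HTTP bind throws, so a failed
// start() leaves nothing running. Bind call shown is a placeholder.
public func start() throws -> Channel {
    xpcServiceHost.start()
    startXPCBroadcasting()
    do {
        return try bootstrap.bind(host: host, port: port).wait()
    } catch {
        xpcBroadcastTask?.cancel()
        xpcBroadcastTask = nil
        xpcServiceHost.stop()
        throw error
    }
}
```

Alternatively, simply move `xpcServiceHost.start()` and `startXPCBroadcasting()` after the successful bind.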
```diff
@@ -14,4 +23,30 @@ struct XcodeMCPProxyServer {
     }
     exit(exitCode)
```
The `defer { parentWatcherTask?.cancel() }` will not run on the non-zero exit path because `exit(exitCode)` terminates the process immediately (no stack unwinding). This is a bit misleading; consider removing the `defer`, or avoid `exit()` (e.g., by returning and handling exit codes differently) if cancellation/cleanup is intended.
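One small sketch of the explicit-cleanup alternative: route every termination through a helper that cancels before calling `exit()`, rather than relying on `defer`:

```swift
// Sketch: cancel the parent watcher on every exit path explicitly,
// since deferred blocks never run once exit() is called.
func terminate(with exitCode: Int32) -> Never {
    parentWatcherTask?.cancel()  // would be skipped by a defer + exit()
    exit(exitCode)
}
```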
XPC service for real-time feedback from the proxy.