feat(xpc): implement XPC service host and status management#57

Open
romanr wants to merge 1 commit into lynnswap:main from romanr:xpc
Conversation


@romanr romanr commented Apr 1, 2026

XPC service for real-time feedback from the proxy.

Copilot AI review requested due to automatic review settings April 1, 2026 21:08

Copilot AI left a comment

Pull request overview

This PR adds an XPC-based status/control surface to the proxy so external clients can receive real-time status updates (including per-session correlated request counts) and request a shutdown, plus a helper-server parent PID watcher.

Changes:

  • Introduces XPC contracts and a mach-service host that supports status fetch, client registration, and status push notifications.
  • Integrates XPC status generation + periodic “push-if-changed” broadcasting into ProxyServer.
  • Adds integration tests for XPC status aggregation/sorting and implements parent-process liveness termination in xcode-mcp-proxy-server.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.

| File | Description |
| --- | --- |
| Tests/ProxyIntegrationTests/ProxyServerTests.swift | Adds test coverage for XPC status aggregation/sorting and unreachable payload behavior. |
| Sources/XcodeMCPProxyServer/XcodeMCPProxyServer.swift | Adds detached parent PID watcher and OSLog logging for helper termination behavior. |
| Sources/XcodeMCPProxy/ProxyXPCServiceHost.swift | Implements mach-service listener, client registry, status encoding, and broadcast-on-change. |
| Sources/XcodeMCPProxy/ProxyXPCContracts.swift | Defines XPC mach service name, status payload models, and XPC protocols. |
| Sources/XcodeMCPProxy/ProxyServer.swift | Wires the XPC host into the server lifecycle and exposes makeXPCStatus and endpoint display logic. |
Comments suppressed due to low confidence (1)

Sources/XcodeMCPProxy/ProxyServer.swift:175

  • shutdownGracefully() can now be triggered via XPC (requestShutdown) and may be called more than once (or concurrently) by multiple clients. The method currently always proceeds to group.shutdownGracefully and closes channels without an idempotence guard, which can lead to double-shutdown of the EventLoopGroup / repeated cancellation work. Consider protecting shutdown with runtimeLock (or a separate lock) and returning the same in-flight future when shutdown has already started.
```swift
public func shutdownGracefully() -> EventLoopFuture<Void> {
    xpcBroadcastTask?.cancel()
    xpcBroadcastTask = nil
    xpcServiceHost.stop()
    let promise = group.next().makePromise(of: Void.self)
    let shutdownContext = beginShutdown()
    shutdownContext.autoApprover?.stop()
    shutdownContext.sessionManager?.shutdown()
    for channel in shutdownContext.channels {
```
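One possible shape for the guard the comment asks for is to cache the in-flight shutdown future behind `runtimeLock`. This is a hypothetical sketch, not the PR's code: `runtimeLock` is the lock named in the comment, and the teardown body is abbreviated from the snippet above.

```swift
// Sketch: make shutdownGracefully() idempotent. Concurrent or repeated
// callers all receive the same in-flight future instead of re-running
// teardown. Names outside the quoted snippet are assumptions.
private var shutdownFuture: EventLoopFuture<Void>?

public func shutdownGracefully() -> EventLoopFuture<Void> {
    runtimeLock.lock()
    if let inFlight = shutdownFuture {
        runtimeLock.unlock()
        return inFlight                      // shutdown already started
    }
    let promise = group.next().makePromise(of: Void.self)
    shutdownFuture = promise.futureResult
    runtimeLock.unlock()

    xpcBroadcastTask?.cancel()
    xpcBroadcastTask = nil
    xpcServiceHost.stop()
    // ... the rest of the existing teardown, which eventually completes
    // `promise`, now runs at most once ...
    return promise.futureResult
}
```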


```swift
private let registry = ProxyXPCClientRegistry()
private let statusProvider: StatusProvider
private let shutdownHandler: ShutdownHandler
private let encoder = JSONEncoder()
```

Copilot AI Apr 1, 2026

ProxyXPCServiceHost shares a single JSONEncoder instance across XPC callbacks and the periodic broadcast. JSONEncoder is not thread-safe, so concurrent fetchStatus/registerClient/pushStatusIfChanged calls can race and corrupt encoding or crash. Create a new encoder per call, or protect encoding behind a lock/serial executor.

Suggested change:

```diff
-private let encoder = JSONEncoder()
+private var encoder: JSONEncoder { JSONEncoder() }
```
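The other option the comment mentions, keeping one encoder behind a lock, could look like the sketch below. It assumes `NIOLock` from swift-nio's NIOConcurrencyHelpers module; the status type name is a placeholder for whatever model ProxyXPCContracts.swift actually defines.

```swift
import Foundation
import NIOConcurrencyHelpers  // NIOLock, from swift-nio

// Sketch: serialize all encoding through one lock so the shared
// JSONEncoder is never used from two threads at once.
private let encoderLock = NIOLock()
private let sharedEncoder = JSONEncoder()

private func encodeStatus<Status: Encodable>(_ status: Status) throws -> Data {
    try encoderLock.withLock {
        try sharedEncoder.encode(status)
    }
}
```

Per-call encoders (the suggested change above) are simpler; the lock variant only pays off if encoder configuration is expensive or shared state matters.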

Comment on lines +55 to +66
```swift
private var isRunning = false

public init(statusProvider: @escaping StatusProvider, shutdownHandler: @escaping ShutdownHandler) {
    self.statusProvider = statusProvider
    self.shutdownHandler = shutdownHandler
    super.init()
    listener.delegate = self
}

public func start() {
    guard !isRunning else { return }
    isRunning = true
    listener.resume()
```

Copilot AI Apr 1, 2026

isRunning is read/written from different execution contexts (XPC listener threads and the NIO event loop via pushStatusIfChanged) without synchronization. This is a data race under Swift concurrency rules. Guard it with a lock/NIOLockedValueBox, or make ProxyXPCServiceHost an actor / otherwise serialize access.
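A minimal version of the `NIOLockedValueBox` approach could look like this sketch (type from swift-nio's NIOConcurrencyHelpers; `listener` is the field from the snippet above, everything else is an assumption):

```swift
import NIOConcurrencyHelpers  // NIOLockedValueBox, from swift-nio

// Sketch: the flag lives inside a locked box, so XPC listener threads
// and the NIO event loop always observe a consistent value.
private let isRunning = NIOLockedValueBox(false)

public func start() {
    // Atomically test-and-set; only the first caller proceeds.
    let shouldStart = isRunning.withLockedValue { running -> Bool in
        if running { return false }
        running = true
        return true
    }
    guard shouldStart else { return }
    listener.resume()
}
```

Making the whole host an actor would be cleaner still, but changes every call site to `async`.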

Comment on lines +77 to +90
```swift
public func listener(_ listener: NSXPCListener, shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
    newConnection.exportedInterface = NSXPCInterface(with: ProxyXPCControlProtocol.self)
    newConnection.exportedObject = self
    newConnection.remoteObjectInterface = NSXPCInterface(with: ProxyXPCClientProtocol.self)

    newConnection.invalidationHandler = { [weak self, weak newConnection] in
        guard let self, let newConnection else { return }
        self.registry.remove(newConnection)
    }

    registry.add(newConnection)
    newConnection.resume()
    return true
}
```

Copilot AI Apr 1, 2026

The listener accepts every incoming mach service connection without validating the connecting process. As written, any local process that can connect to the service can call requestShutdown (and also receive status updates), which is a straightforward DoS / information disclosure vector. Consider verifying the client via audit token / code-signing identity, restricting to the same user/session, or removing/guarding requestShutdown behind an authorization check.
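The weakest useful gate, rejecting connections from other users, needs only public `NSXPCConnection` properties. A sketch (not the PR's code; a production check would also verify the client's code-signing identity):

```swift
import Foundation  // NSXPCListener, NSXPCConnection; getuid() via Darwin

// Sketch: refuse the connection unless the peer runs as the same user.
public func listener(_ listener: NSXPCListener,
                     shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
    guard newConnection.effectiveUserIdentifier == getuid() else {
        return false   // different user: reject before resume()
    }
    // ... configure interfaces, register, and resume as in the snippet above ...
    return true
}
```

Note this does not stop other same-user processes from calling `requestShutdown`; that still needs an authorization check or a code-signing requirement.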

Comment on lines +70 to +75
```swift
public func stop() {
    guard isRunning else { return }
    isRunning = false
    listener.invalidate()
    logger.info("XPC listener stopped", metadata: ["service": "\(ProxyXPCServiceConfiguration.machServiceName)"])
}
```

Copilot AI Apr 1, 2026

stop() calls listener.invalidate(), which permanently invalidates an NSXPCListener (it cannot be resumed again). Since start() can be called again after stop() (it only checks isRunning), this can lead to a silent failure to restart the listener. Either document that the host is single-use, or recreate the listener on start / avoid invalidation if restart is intended.
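If restart is intended, one option is to recreate the listener on every `start()`, since an invalidated `NSXPCListener` cannot be resumed. A hypothetical sketch (field and configuration names assumed from the surrounding snippets):

```swift
import Foundation  // NSXPCListener

// Sketch: hold the listener as an optional and build a fresh one per
// start(), so stop()/start() cycles actually restart the service.
private var listener: NSXPCListener?

public func start() {
    guard !isRunning else { return }
    isRunning = true
    let newListener = NSXPCListener(machServiceName: ProxyXPCServiceConfiguration.machServiceName)
    newListener.delegate = self
    newListener.resume()
    listener = newListener
}

public func stop() {
    guard isRunning else { return }
    isRunning = false
    listener?.invalidate()   // permanently invalid; dropped below
    listener = nil
}
```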

Comment on lines 117 to 120
```swift
public func start() throws -> Channel {
    xpcServiceHost.start()
    startXPCBroadcasting()
    let logger = self.logger
```

Copilot AI Apr 1, 2026

start() begins the XPC listener and schedules periodic broadcasting before the HTTP server bind succeeds. If binding throws (e.g., port in use), the method exits with an error but the XPC service remains active and the repeated task continues running. Start XPC/broadcasting only after a successful bind/install, or add a defer cleanup that stops/cancels on failure.
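The cleanup-on-failure variant could look like this sketch (`bootstrap`, `host`, and `port` are assumptions about ProxyServer's internals, not the PR's actual code):

```swift
// Sketch: if the HTTP bind throws, undo the XPC side effects so a
// failed start() leaves nothing running in the background.
public func start() throws -> Channel {
    xpcServiceHost.start()
    startXPCBroadcasting()
    do {
        return try bootstrap.bind(host: host, port: port).wait()
    } catch {
        xpcBroadcastTask?.cancel()   // stop the periodic push task
        xpcBroadcastTask = nil
        xpcServiceHost.stop()        // tear down the mach-service listener
        throw error
    }
}
```

Starting XPC only after a successful bind avoids the window entirely, at the cost of clients briefly seeing no service during startup.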

Comment on lines 10 to 24
```diff
@@ -14,4 +23,30 @@ struct XcodeMCPProxyServer {
     }
     exit(exitCode)
```

Copilot AI Apr 1, 2026

The defer { parentWatcherTask?.cancel() } will not run on the non-zero exit path because exit(exitCode) terminates the process immediately (no stack unwinding). This is a bit misleading; consider removing the defer, or avoid exit() (e.g., by returning and handling exit codes differently) if cancellation/cleanup is intended.
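If cleanup before termination is actually wanted, it has to happen explicitly on the exit path, because `exit()` terminates without unwinding the stack. A hypothetical sketch:

```swift
import Foundation  // exit()

// Sketch: run cleanup explicitly, since pending `defer` blocks never
// fire once exit() is called. `parentWatcherTask` is the task from the
// surrounding code; this helper is an assumption.
func terminate(parentWatcherTask: Task<Void, Never>?, exitCode: Int32) -> Never {
    parentWatcherTask?.cancel()   // explicit, not via defer
    exit(exitCode)
}
```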
