readme: add high level detail around IPC #10592
# IPC3 Architecture

This directory houses the Version 3 Inter-Processor Communication (IPC) handling components. IPC3 is the older, legacy framework used extensively across initial Sound Open Firmware releases, before the transition to IPC4's compound pipeline commands.

## Overview

The IPC3 architecture treats streaming, DAI configuration, and pipeline management as distinct scalar events. Each message carries a `sof_ipc_cmd_hdr` denoting the "Global Message Type" (e.g., Stream, DAI, Trace, PM) and the targeted command within that type.

## Command Structure and Routing

Every received message is placed into an Rx buffer and initially routed to `ipc_cmd()`. Based on the `cmd` field inside the `sof_ipc_cmd_hdr`, it delegates to one of the handler subsystems:

* `ipc_glb_stream_message`: stream/pipeline configuration and states
* `ipc_glb_dai_message`: DAI parameters and formats
* `ipc_glb_pm_message`: power management operations
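The dispatch step can be sketched as a switch over the masked global-type bits. This is a minimal illustration only: the mask, shift, and type values below are made up for the example, and the real masks and handler signatures live in SOF's IPC3 header definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative constants only -- the real bit positions and values are
 * defined in SOF's IPC3 headers, not here. */
#define GLB_SHIFT 28
#define GLB_MASK  (0xfu << GLB_SHIFT)
#define GLB_STREAM_MSG (0x3u << GLB_SHIFT)
#define GLB_PM_MSG     (0x4u << GLB_SHIFT)
#define GLB_DAI_MSG    (0x5u << GLB_SHIFT)

struct ipc_cmd_hdr { uint32_t size; uint32_t cmd; };

enum handler { H_STREAM, H_DAI, H_PM, H_UNKNOWN };

/* Mirrors the shape of the top-level routing done by ipc_cmd(): mask off
 * the global-type bits and delegate to the matching subsystem handler. */
static enum handler route_cmd(const struct ipc_cmd_hdr *hdr)
{
	switch (hdr->cmd & GLB_MASK) {
	case GLB_STREAM_MSG:
		return H_STREAM;   /* -> ipc_glb_stream_message() */
	case GLB_DAI_MSG:
		return H_DAI;      /* -> ipc_glb_dai_message() */
	case GLB_PM_MSG:
		return H_PM;       /* -> ipc_glb_pm_message() */
	default:
		return H_UNKNOWN;  /* trace, topology, debug, ... */
	}
}
```

The command bits below the global-type field select the specific operation within the chosen subsystem.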
```mermaid
graph TD
    Mailbox[IPC Mailbox Interrupt] --> Valid[mailbox_validate]
    Valid --> Disp[IPC Core Dispatcher]

    Disp -->|Global Type 1| StreamMsg[ipc_glb_stream_message]
    Disp -->|Global Type 2| DAIMsg[ipc_glb_dai_message]
    Disp -->|Global Type 3| PMMsg[ipc_glb_pm_message]
    Disp -->|Global Type ...| TraceMsg[ipc_glb_trace_message]

    subgraph Stream Commands
        StreamMsg --> StreamAlloc[ipc_stream_pcm_params]
        StreamMsg --> StreamTrig[ipc_stream_trigger]
        StreamMsg --> StreamFree[ipc_stream_pcm_free]
        StreamMsg --> StreamPos[ipc_stream_position]
    end

    subgraph DAI Commands
        DAIMsg --> DAIConf[ipc_msg_dai_config]
    end

    subgraph PM Commands
        PMMsg --> PMCore[ipc_pm_core_enable]
        PMMsg --> PMContext[ipc_pm_context_save / restore]
    end
```
## Processing Flows

### Stream Triggering (`ipc_stream_trigger`)

Triggering in IPC3 is strictly hierarchical: pipelines must be built and components fully parsed before active streaming commands arrive.

1. **Validation**: The IPC layer fetches the host component ID from the stream message.
2. **Device Lookup**: It searches the component list via `ipc_get_comp_by_id()` for the PCM device matching the pipeline.
3. **Execution**: If valid, the pipeline graph is walked recursively and its state changed via `pipeline_trigger()`.
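The lookup-then-trigger shape of this flow can be sketched as below. The types and the table-based lookup are simplified stand-ins invented for the example; the real code uses SOF's `struct comp_dev`, a registered-component list, and `pipeline_trigger()`.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for SOF's component and pipeline objects. */
struct pipeline { int state; };               /* 0 = idle */
struct comp_dev { uint32_t id; struct pipeline *pipeline; };

#define COMP_TRIGGER_START 1

/* Hypothetical lookup over a fixed table; the real lookup walks a
 * linked list of registered components. */
static struct comp_dev *lookup_comp(struct comp_dev *table, size_t n,
                                    uint32_t comp_id)
{
	for (size_t i = 0; i < n; i++)
		if (table[i].id == comp_id)
			return &table[i];
	return NULL;
}

/* Mirrors the shape of ipc_stream_trigger(): validate, look up, trigger. */
static int stream_trigger(struct comp_dev *table, size_t n, uint32_t comp_id)
{
	struct comp_dev *pcm = lookup_comp(table, n, comp_id);

	if (!pcm)
		return -EINVAL;  /* unknown component: NACK the host */

	pcm->pipeline->state = COMP_TRIGGER_START; /* pipeline_trigger() */
	return 0;
}
```

The negative-errno return maps directly onto the error header acknowledged back to the host in the diagram below.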
```mermaid
sequenceDiagram
    participant Host
    participant IPC3 as IPC3 Handler (ipc_stream_trigger)
    participant Pipe as Pipeline Framework
    participant Comp as Connected Component

    Host->>IPC3: Send SOF_IPC_STREAM_TRIG_START
    activate IPC3
    IPC3->>IPC3: ipc_get_comp_by_id(stream.comp_id)
    IPC3->>Pipe: pipeline_trigger(COMP_TRIGGER_START)
    activate Pipe
    Pipe->>Comp: pipeline_for_each_comp(COMP_TRIGGER_START)
    Comp-->>Pipe: Success (Component ACTIVE)
    Pipe-->>IPC3: Return Status
    deactivate Pipe

    alt If Success
        IPC3-->>Host: Acknowledge Success Header
    else If Error
        IPC3-->>Host: Acknowledge Error Header (EINVAL / EIO)
    end
    deactivate IPC3
```
### DAI Configuration (`ipc_msg_dai_config`)

DAI (Digital Audio Interface) configuration sets up physical I2S, ALH, SSP, or HDA parameters.

1. **Format Unpacking**: Converts the `sof_ipc_dai_config` payload sent by the ALSA driver into the internal DSP structure `ipc_config_dai`.
2. **Device Selection**: Identifies the exact DAI interface and finds its tracking device via `dai_get`.
3. **Hardware Config**: Applies the unpacked settings to the hardware via the specific DAI driver's `set_config` function.
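The three steps can be sketched with a per-driver function pointer standing in for the driver's `set_config` hook. All structs and field names here are hypothetical simplifications; the real payload and internal config are `sof_ipc_dai_config` and `ipc_config_dai`.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical simplified shapes of the host payload and internal config. */
struct host_dai_config { uint32_t type; uint32_t index; uint32_t format; };
struct dsp_dai_config  { uint32_t format; };

struct dai {
	uint32_t type, index;
	uint32_t applied_format;
	/* per-driver hook, playing the role of the SSP/HDA set_config */
	int (*set_config)(struct dai *d, const struct dsp_dai_config *c);
};

static int ssp_set_config(struct dai *d, const struct dsp_dai_config *c)
{
	d->applied_format = c->format; /* "write the registers" */
	return 0;
}

/* The three steps of ipc_msg_dai_config(): unpack, select the device
 * (dai_get() in the real code), delegate to the hardware driver. */
static int dai_config(struct dai *table, size_t n,
                      const struct host_dai_config *host)
{
	struct dsp_dai_config cfg = { .format = host->format }; /* unpack */

	for (size_t i = 0; i < n; i++)
		if (table[i].type == host->type && table[i].index == host->index)
			return table[i].set_config(&table[i], &cfg);
	return -1; /* no such DAI registered */
}
```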
```mermaid
sequenceDiagram
    participant Host
    participant IPC3 as IPC3 Handler (ipc_msg_dai_config)
    participant DAIDev as DAI Framework (dai_get)
    participant HWDriver as HW Specific Driver (e.g. SSP)

    Host->>IPC3: Send SOF_IPC_DAI_CONFIG (e.g., SSP1, I2S Format)
    activate IPC3

    IPC3->>IPC3: build_dai_config()
    IPC3->>DAIDev: dai_get(type, index)
    DAIDev-->>IPC3: pointer to dai instance

    IPC3->>HWDriver: dai_set_config()
    activate HWDriver
    HWDriver-->>HWDriver: configures registers
    HWDriver-->>IPC3: hardware configured
    deactivate HWDriver

    IPC3-->>Host: Acknowledged Setting
    deactivate IPC3
```
## Mailbox and Validation (`mailbox_validate`)

All commands passing through this layer enforce strict payload boundaries. `mailbox_validate()` reads the first word directly from mailbox memory, identifying the command type before parsing parameters out of shared RAM, so that host/DSP mismatches cannot cascade into the command handlers.
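The essence of the boundary check can be sketched as follows. The window size and error value are illustrative assumptions, not the real platform constants.

```c
#include <assert.h>
#include <stdint.h>

#define MAILBOX_SIZE 0x1000u  /* illustrative mailbox window size */
#define ERR_INVALID  (-22)    /* stand-in error code */

struct ipc_cmd_hdr { uint32_t size; uint32_t cmd; };

/* Sketch of the validation idea: read the header first and refuse to
 * parse further if the declared payload cannot fit in the shared
 * window, so a host/DSP mismatch fails early and loudly. */
static int validate_hdr(const struct ipc_cmd_hdr *hdr)
{
	if (hdr->size < sizeof(*hdr) || hdr->size > MAILBOX_SIZE)
		return ERR_INVALID;
	return 0;
}
```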
---
# IPC4 Architecture

This directory holds the handlers and topology-parsing logic for Inter-Processor Communication Version 4. IPC4 introduces a significantly denser, compound-command structure built around "pipelines" and dynamic "modules" rather than static DSP stream roles.

## Overview

Unlike IPC3, which triggers single components via scalar commands, IPC4 uses compound structures. A single host interrupt may carry batch operations: building an entire processing chain, setting module parameters sequentially, and triggering a start across multiple interconnected blocks at once.

## Message Handling and Dispatch

IPC4 messages are received via the generic IPC handler entry point `ipc_cmd()`. For IPC4 FW_GEN (global) messages, `ipc_cmd()` dispatches to `ipc4_process_glb_message()`, which then determines whether the incoming payload is a true global configuration message or is addressed to a specific instantiated module.
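The global-vs-module decision can be sketched as a single bit test on the primary message word. The bit position below is invented for illustration; the authoritative field layout is defined in SOF's IPC4 message headers.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout only -- in this sketch, one bit of the primary
 * word selects between global (FW_GEN) and module-addressed messages. */
#define MSG_TARGET_SHIFT 30
#define MSG_TARGET_MASK  (1u << MSG_TARGET_SHIFT)

enum ipc4_route { ROUTE_GLOBAL, ROUTE_MODULE };

/* Models the first decision inside ipc4_process_glb_message(): hand the
 * payload to the global handler or to a module handler. */
static enum ipc4_route route_ipc4(uint32_t primary)
{
	return (primary & MSG_TARGET_MASK) ? ROUTE_MODULE : ROUTE_GLOBAL;
}
```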
```mermaid
graph TD
    Mailbox[IPC Mailbox Interrupt] --> CoreIPC[ipc_cmd]

    CoreIPC --> TypeSel[Decode IPC Message Type]
    TypeSel -->|IPC4 FW_GEN| Disp[ipc4_process_glb_message]

    Disp -->|Global Message| Global[Global Handler]
    Disp -->|Module Message| Mod[Module Handler]

    subgraph Global Handler
        Global --> NewPipe[ipc4_new_pipeline]
        Global --> DelPipe[ipc4_delete_pipeline]
        Global --> MemMap[ipc4_process_chain_dma]
        Global --> SetPipe[ipc4_set_pipeline_state]
    end

    subgraph Module Handler
        Mod --> InitMod[ipc4_init_module_instance]
        Mod --> SetMod[ipc4_set_module_params]
        Mod --> GetMod[ipc4_get_module_params]
        Mod --> Bind[ipc4_bind]
        Mod --> Unbind[ipc4_unbind]
    end
```
## Processing Flows

### Pipeline State Management (`ipc4_set_pipeline_state`)

The core driver of graph execution in IPC4 is `ipc4_set_pipeline_state()`. It accepts a multi-stage request (e.g., `START`, `PAUSE`, `RESET`) and coordinates triggering the internal pipelines.

1. **State Translation**: Maps the incoming IPC4 state request to an internal SOF state (e.g., `IPC4_PIPELINE_STATE_RUNNING` -> `COMP_TRIGGER_START`).
2. **Graph Traversal**: Fetches the pipeline object associated with the command and prepares it via `ipc4_pipeline_prepare()`.
3. **Trigger Execution**: Executes `ipc4_pipeline_trigger()`, recursively changing states across the internal graphs and alerting either the LL scheduler or DP threads.
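The state-translation step is essentially a lookup table. The sketch below uses invented enum values (the real ABI values differ) to show the shape of the mapping.

```c
#include <assert.h>

/* Illustrative enums; numeric values here are NOT the real ABI values. */
enum ipc4_ppl_state { IPC4_PPL_RUNNING, IPC4_PPL_PAUSED, IPC4_PPL_RESET };
enum comp_trigger   { COMP_TRIGGER_START, COMP_TRIGGER_PAUSE,
                      COMP_TRIGGER_RESET, COMP_TRIGGER_INVALID };

/* The translation step of ipc4_set_pipeline_state(): map the host
 * request onto the internal trigger vocabulary before touching the
 * graph; unknown states are rejected up front. */
static enum comp_trigger translate_state(enum ipc4_ppl_state s)
{
	switch (s) {
	case IPC4_PPL_RUNNING: return COMP_TRIGGER_START;
	case IPC4_PPL_PAUSED:  return COMP_TRIGGER_PAUSE;
	case IPC4_PPL_RESET:   return COMP_TRIGGER_RESET;
	default:               return COMP_TRIGGER_INVALID;
	}
}
```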
```mermaid
sequenceDiagram
    participant Host
    participant IPC4Set as ipc4_set_pipeline_state
    participant PPLPrep as ipc4_pipeline_prepare
    participant PPLTrig as ipc4_pipeline_trigger
    participant Comp as Graph Components

    Host->>IPC4Set: IPC4_PIPELINE_STATE_RUNNING
    activate IPC4Set

    IPC4Set->>PPLPrep: Maps to COMP_TRIGGER_START
    PPLPrep->>Comp: Applies PCM params & formatting
    Comp-->>PPLPrep: Components ready

    IPC4Set->>PPLTrig: execute trigger
    PPLTrig->>Comp: pipeline_trigger(COMP_TRIGGER_START)
    Comp-->>PPLTrig: Success

    IPC4Set-->>Host: Reply: ipc4_send_reply()
    deactivate IPC4Set
```
### Module Instantiation and Binding (`ipc4_bind`)

In IPC4, modules (components) are bound together dynamically rather than constructed statically by the firmware at boot time.

1. **Instantiation**: `ipc4_init_module_instance()` allocates the module from the DSP heap based on UUIDs.
2. **Binding**: `ipc4_bind()` takes two module IDs and dynamically connects their sink and source pins using intermediate `comp_buffer` objects.
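The bind step can be sketched with minimal stand-in structures: allocate an intermediate buffer, then attach it as the source module's output and the sink module's input. Pin indices, reference counting, and error unwinding are omitted, and all names here are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for modules and the intermediate comp_buffer that
 * ipc4_bind() places between a source pin and a sink pin. */
struct comp_buffer { int allocated; };
struct module {
	struct comp_buffer *sink_buf;   /* output side of this module */
	struct comp_buffer *source_buf; /* input side of this module  */
};

/* Sketch of the bind flow: create the buffer, then wire both ends. */
static int bind_modules(struct module *src, struct module *sink,
                        struct comp_buffer *buf)
{
	if (!src || !sink || !buf)
		return -1;

	buf->allocated = 1;      /* buffer_new() in the real code */
	src->sink_buf = buf;     /* source writes into the buffer */
	sink->source_buf = buf;  /* sink reads out of the buffer  */
	return 0;
}
```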
```mermaid
sequenceDiagram
    participant Host
    participant IPC4Bind as ipc4_bind
    participant SrcMod as Source Module
    participant SinkMod as Sink Module
    participant Buff as Connection Buffer

    Host->>IPC4Bind: Bind Src(ID) -> Sink(ID)
    activate IPC4Bind

    IPC4Bind->>SrcMod: Locate by ID
    IPC4Bind->>SinkMod: Locate by ID

    IPC4Bind->>Buff: buffer_new() (Create Intermediate Storage)

    IPC4Bind->>SrcMod: Bind source pin to Buff (via comp_bind/comp_buffer_connect)
    IPC4Bind->>SinkMod: Bind sink pin to Buff (via comp_bind/comp_buffer_connect)

    IPC4Bind-->>Host: Reply: Linked
    deactivate IPC4Bind
```
## Compound Messages (`ipc_wait_for_compound_msg`)

To accelerate initialization, IPC4 supports compound commands: the host can chain multiple IPC messages back-to-back behind a single mailbox trigger before waiting for ACKs.

`ipc_compound_pre_start` and `ipc_compound_post_start` manage this batch execution safely, without overflowing the Zephyr work queues or breaking hardware configuration during intermediate states.
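A toy model of the bookkeeping role these helpers play: a pending counter is raised as each chained message starts and dropped as it completes, and the final ACK is released only once the batch has drained. This mirrors the role described above, not the actual implementation.

```c
#include <assert.h>

/* Toy batch counter, standing in for the bookkeeping around
 * ipc_compound_pre_start()/ipc_compound_post_start(). */
static int compound_pending;

static void compound_pre_start(void)
{
	compound_pending++;          /* one more chained message in flight */
}

/* Returns 1 when the whole batch has drained and the ACK may be sent. */
static int compound_post_start(void)
{
	if (compound_pending > 0)
		compound_pending--;
	return compound_pending == 0;
}
```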
---
# Inter-Processor Communication (IPC) Core Architecture

This directory contains the common foundation for all Inter-Processor Communication (IPC) within the Sound Open Firmware (SOF) project. It bridges the gap between hardware mailbox interrupts and the version-specific (IPC3/IPC4) message handlers.

## Overview

The core IPC layer is agnostic to the specific structure or content of messages (whether they are IPC3 stream commands or IPC4 pipeline messages). Its primary responsibilities are:

1. **Message State Management**: Tracking whether a message is being processed, queued, or completed.
2. **Interrupt Bridging**: Routing incoming platform interrupts into the Zephyr or SOF thread-domain scheduler.
3. **Queueing**: Safe traversal and delayed processing via `k_work` items or SOF scheduler tasks.
4. **Platform Acknowledgment**: Signaling the hardware mailbox layer to confirm receipt or signal completion.
## Architecture Diagram

Any IPC message moves from a hardware interrupt, through the platform driver, into the core IPC handlers, and finally up to the version-specific handlers.
```mermaid
graph TD
    Platform[Platform / Mailbox HW] -->|IRQ| CoreIPC[Core IPC Framework]

    subgraph CoreIPC [src/ipc/ipc-common.c]
        Queue[Msg Queue / Worker Task]
        Dispatcher[IPC Message Dispatcher]
        PM[Power Management Wait/Wake]

        Queue --> Dispatcher
        Dispatcher --> PM
    end

    Dispatcher -->|Version Specific Parsing| IPC3[IPC3 Handler]
    Dispatcher -->|Version Specific Parsing| IPC4[IPC4 Handler]

    IPC3 -.-> CoreIPC
    IPC4 -.-> CoreIPC
    CoreIPC -.->|Ack| Platform
```
## Processing Flow

When the host writes a command to the IPC mailbox and triggers an interrupt, the hardware-specific driver (`src/platform/...`) catches the IRQ and eventually calls down into the IPC framework.

Different RTOS environments (Zephyr vs. native bare-metal SOF) hand off to a processing thread differently. On Zephyr, this leans heavily on the `k_work` queues via `ipc_work_handler`.

### Receiving Messages (Host -> DSP)
```mermaid
sequenceDiagram
    participant Host
    participant Platform as Platform Mailbox (IRQ)
    participant CoreIPC as Core IPC Worker
    participant Handler as Version-Specific Handler (IPC3/4)

    Host->>Platform: Writes Mailbox, Triggers Interrupt
    activate Platform
    Platform->>CoreIPC: ipc_schedule_process()
    deactivate Platform

    Note over CoreIPC: Worker thread wakes up

    activate CoreIPC
    CoreIPC->>Platform: ipc_platform_wait_ack() (Optional blocking)
    CoreIPC->>Handler: version_specific_command_handler()

    Handler-->>CoreIPC: Command Processed (Status Header)
    CoreIPC->>Platform: ipc_complete_cmd()
    Platform-->>Host: Signals Completion Mailbox / IRQ
    deactivate CoreIPC
```
### Sending Messages (DSP -> Host)

Firmware-initiated messages (such as notifications for position updates, traces, or XRUNs) rely on a queue if the hardware is busy.
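The send-or-defer decision can be sketched with an intrusive list, in the spirit of how `ipc_msg_send()` queues `struct ipc_msg` instances when the mailbox is still owned by a prior message. The structures below are simplified stand-ins.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified DSP->host message with an intrusive list link. */
struct ipc_msg { int id; struct ipc_msg *next; };

struct tx_state {
	int hw_busy;             /* mailbox still owned by a prior message */
	struct ipc_msg *tx_head; /* deferred messages, oldest first */
	int last_sent;           /* stand-in for "copied to mailbox" */
};

/* Sketch of the core decision: send immediately when the mailbox is
 * free, otherwise append to the Tx list for later draining. */
static void msg_send(struct tx_state *st, struct ipc_msg *msg)
{
	if (!st->hw_busy) {
		st->last_sent = msg->id; /* copy payload + raise host IRQ */
		st->hw_busy = 1;         /* busy until the host ACKs */
		return;
	}

	/* Queue at the tail so message ordering is preserved. */
	struct ipc_msg **p = &st->tx_head;
	while (*p)
		p = &(*p)->next;
	msg->next = NULL;
	*p = msg;
}
```

When the host acknowledges, the real code clears the busy flag and drains the list head first, preserving notification order.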
```mermaid
sequenceDiagram
    participant DSP as DSP Component (e.g. Pipeline Tracker)
    participant Queue as IPC Message Queue
    participant Platform as Platform Mailbox

    DSP->>Queue: ipc_msg_send() / ipc_msg_send_direct()
    activate Queue
    Queue-->>Queue: Add to Tx list (if BUSY)
    Queue->>Platform: Copy payload to mailbox and send

    alt If host is ready
        Platform-->>Queue: Success
        Queue->>Platform: Triggers IRQ to Host
    else If host requires delayed ACKs
        Queue-->>DSP: Queued pending prior completion
    end
    deactivate Queue
```
## Global IPC Objects and Helpers

* `ipc_comp_dev`: Wrapper structure linking generic devices (`comp_dev`) to their IPC pipeline and endpoint identifiers.
* `ipc_get_comp_dev` / `ipc_get_ppl_comp`: Lookup helpers that use the central graph tracking to find components either directly by component ID or by traversing the pipeline graph from a given `pipeline_id` and direction (upstream/downstream).
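The two lookup styles can be sketched over a singly linked list of wrappers. The structure and field names below are simplified for illustration (the real `ipc_comp_dev` uses SOF's `list_item` machinery).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified wrapper linking a device to its pipeline id. */
struct ipc_comp_dev {
	uint32_t id;
	uint32_t pipeline_id;
	struct ipc_comp_dev *next;
};

/* Direct lookup by component id, in the spirit of ipc_get_comp_dev(). */
static struct ipc_comp_dev *get_comp(struct ipc_comp_dev *head, uint32_t id)
{
	for (; head; head = head->next)
		if (head->id == id)
			return head;
	return NULL;
}

/* First component on a given pipeline, in the spirit of
 * ipc_get_ppl_comp() (direction handling omitted). */
static struct ipc_comp_dev *get_ppl_comp(struct ipc_comp_dev *head,
                                         uint32_t ppl_id)
{
	for (; head; head = head->next)
		if (head->pipeline_id == ppl_id)
			return head;
	return NULL;
}
```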