**`scripts/Readme.md`** (92 additions, 32 deletions)

SOF has several build targets depending on whether you are building firmware, tooling, documentation or topologies. This directory has a helper for each.

### Firmware Build (`xtensa-build-zephyr.py`)

Firmware can either be built using west commands directly or by the `xtensa-build-zephyr.py` script. This script wraps up the west commands and can build using either the Zephyr SDK compiler or the Cadence xtensa compiler for xtensa targets.

Please run the script with `--help` to see all options.

E.g. to build SOF for Intel Pantherlake:

1) Enable the python virtual environment for west. This should be in your SOF workspace installation directory. The default is `~/work/sof` (this only needs to be run once).

```bash
source ~/work/sof/.venv/bin/activate
```

2) Now run the build script. *Note: most build errors are a result of ingredients being out of sync with the west manifest. Please run `west update` and rebuild before fixing/reporting build errors.*

```bash
./scripts/xtensa-build-zephyr.py -p ptl
```

### Reproducible Output Builds (`test-repro-build.sh`)

This script can be used to locally reproduce the exact build steps and environment of specific CI validation tests.

Please run
```bash
./scripts/test-repro-build.sh --help
```
for full options.


## Tools and Topologies

Tooling and topology can be built together using one script. To build all topologies please run:

```bash
./scripts/build-tools.sh
```

**Options for `build-tools.sh`:**

* `-c` : Rebuild `ctl/` tool
* `-l` : Rebuild `logger/` tool
* `-p` : Rebuild `probes/` tool
* `-T` : Rebuild ALL `topology/` targets
* `-X` : Rebuild topology1 only
* `-Y` : Rebuild topology2 only
* `-t` : Rebuild test topologies
* `-A` : Clone and rebuild the local ALSA git version for `alsa-lib` and `alsa-utils` with latest non-distro features.
* `-C` : No build, only CMake re-configuration. Shows CMake targets.

*Warning: building tools is incremental by default. To build from scratch delete the `tools/build_tools` directory or use `-C`.*
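The flags above select which targets get rebuilt and can be combined freely. A minimal, hypothetical sketch of that kind of selective-rebuild flag handling (illustrative only, not the actual `build-tools.sh` source):

```bash
#!/bin/sh
# Hypothetical sketch of selective-rebuild flag handling in the style of
# build-tools.sh -- not the actual script source.
parse_targets() {
	OPTIND=1
	targets=""
	while getopts "clptTXY" opt "$@"; do
		case "$opt" in
		c) targets="$targets ctl" ;;
		l) targets="$targets logger" ;;
		p) targets="$targets probes" ;;
		t) targets="$targets test-topologies" ;;
		T) targets="$targets topology1 topology2" ;;
		X) targets="$targets topology1" ;;
		Y) targets="$targets topology2" ;;
		*) return 1 ;;
		esac
	done
	# With no flags, everything is rebuilt.
	[ -n "$targets" ] || targets=" ctl logger probes topology1 topology2"
	echo "building:$targets"
}

parse_targets -l -t  # prints "building: logger test-topologies"
```

For example, combining `-l -t` rebuilds only the logger and the test topologies, while no flags at all falls back to rebuilding everything.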

### ALSA Specific Build (`build-alsa-tools.sh`)

To pull down and recompile only the ALSA libraries from their public `alsa-lib` GitHub upstream, independently of SOF topologies:

```bash
./scripts/build-alsa-tools.sh
```

## Testbench and Emulation

Testbench is a host application that is used to run SOF processing modules on a developer's PC. This allows module development with regular host-based tooling.

### Rebuilding the Testbench (`rebuild-testbench.sh`)

This script cleans and rebuilds the host test application binary. Ensure you supply the correct target platform wrapper or fuzzing backend.

**Usage Options:**

* `-p <platform>` : Build testbench binary for `xt-run` for selected platform (e.g. `-p tgl`). When omitted, performs a `BUILD_TYPE=native`, compile-only check.
* `-f` : Build testbench via a compiler provided by a fuzzer (default path: `.../AFL/afl-gcc`).
* `-j` : Number of parallel make jobs (defaults to `nproc`).

### Running the Testbench (`host-testbench.sh`)

Runs the testbench test sequences. These exercise specific processing components with basic inputs to verify they run without segfaults.

```bash
./scripts/host-testbench.sh
```

### QEMU Check (`qemu-check.sh`)

Automated verifier for executing firmware builds under QEMU emulation.

**Usage:**

```bash
./scripts/qemu-check.sh [platform(s)]
```

* Supported platforms are: `imx8`, `imx8x`, `imx8m`.
* Runs all supported platforms by default if none are provided.
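The "run everything when no platform is given" behaviour can be sketched as follows (a hypothetical illustration, not the actual `qemu-check.sh` source):

```bash
#!/bin/sh
# Hypothetical sketch of the platform-defaulting behaviour described above
# -- not the actual qemu-check.sh source.
SUPPORTED="imx8 imx8x imx8m"

select_platforms() {
	if [ "$#" -eq 0 ]; then
		# No argument: run every supported platform.
		echo "$SUPPORTED"
		return 0
	fi
	for p in "$@"; do
		case " $SUPPORTED " in
		*" $p "*) ;;                          # known platform
		*) echo "unsupported: $p" >&2; return 1 ;;
		esac
	done
	echo "$@"
}

select_platforms imx8x  # prints "imx8x"
select_platforms        # prints "imx8 imx8x imx8m"
```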

## SDK Support

There is some SDK support in this directory for speeding up or simplifying tasks with multiple steps.

### New Modules (`sdk-create-module.py`)

A new module can be created by running the SDK Create Module script. This python helper copies the SOF template audio module and automatically renames all strings, CMake files, and Kconfig entries to match the new module. It also registers a new DSP UUID and adds TOML manifest entries (for targets that need them).

Please run:

```bash
./scripts/sdk-create-module.py new_module_name
```

## Docker

The docker container provided in `docker_build` sets up a build environment for building Sound Open Firmware. A working docker installation is needed to run the docker build container.

*Note: In order to run docker as non sudo/root user please run:*

```bash
sudo usermod -aG docker your-user-name
```

Then logout and login again.

**Quick Start:**

First, build the docker container. This step needs to be done initially and when the toolchain or ALSA dependencies are updated.

```bash
cd scripts/docker_build
./docker-build.sh
```

After the container is built, it can be used to run the scripts.

To build for tigerlake:

```bash
./scripts/docker-run.sh ./scripts/xtensa-build-all.sh -l tgl
```

or (this command may prompt for a password during rimage installation inside the container)

```bash
./scripts/docker-run.sh ./scripts/xtensa-build-all.sh tgl
```

To rebuild the topology and logger:

```bash
./scripts/docker-run.sh ./scripts/build-tools.sh
```

An incremental `sof.git` build:

```bash
./scripts/docker-run.sh make
```

Or enter a shell:

```bash
./scripts/docker-run.sh bash
```
**`src/debug/debug_stream/readme.md`** (61 additions)
# SOF Debug Stream

The `debug_stream` framework is an abstract logging and live-data streaming mechanism that lets the DSP asynchronously push structured or freeform diagnostic records straight out to the host.

## Feature Overview

Unlike standard tracing (`mtrace`), which requires buffering and complex host parsing logic often tied directly to pipeline topologies or ALSA interfaces, the `debug_stream` bypasses the audio framework entirely. It utilizes the dedicated IPC Memory Windows (specifically the debug slot) to write data.

The stream is particularly useful for reporting:

1. **Thread Information:** Real-time data from Zephyr OS threads (like CPU runtime, context switch frequencies, or stack high-water marks).
2. **Text Messages (`ds_msg`):** Lightweight string prints that bypass the standard heavily-formatted logger.

## How to Enable

These features are disabled by default to save firmware footprint. You can enable them via Kconfig:

* `CONFIG_SOF_DEBUG_STREAM_SLOT=y` : Master switch. Reserves exactly one Memory Window 4k block (default Slot 3) mapping to host space.
* `CONFIG_SOF_DEBUG_STREAM_THREAD_INFO=y` : Activates the Zephyr thread statistics compiler integration (`INIT_STACKS`, `THREAD_MONITOR`).
* `CONFIG_SOF_DEBUG_STREAM_TEXT_MSG=y` : Allows calling `ds_msg("...", ...)` scattered throughout DSP C code to emit plain strings.
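Taken together, a Kconfig fragment enabling all three options looks like this (the option names are the ones listed above; where the fragment lives depends on your build setup):

```
CONFIG_SOF_DEBUG_STREAM_SLOT=y
CONFIG_SOF_DEBUG_STREAM_THREAD_INFO=y
CONFIG_SOF_DEBUG_STREAM_TEXT_MSG=y
```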

## Architecture

The architecture revolves around a "Slot" abstraction: data is copied sequentially through a ring buffer into the ADSP debug-window slot used for the debug stream (mapped over PCIe/SPI so the host can read it non-destructively).

```mermaid
graph TD
subgraph SOF Firmware
SysEvent["System Event / OS Timer"] --> |Triggers| DSThread["Thread Info Collector"]
DevCode["Developer Code"] --> |"ds_msg()"| Text["Text Subsystem"]

DSThread --> Formatter[DS Formatter]
Text --> Formatter

Formatter --> Slot[Memory Window Slot 3]
end

subgraph Host System
PyTool[tools/debug_stream/debug_stream.py]
Slot -.->|PCIe DMA / IPC Memory| PyTool
PyTool --> |Stdout| User[Developer Terminal]
end
```

## Usage Example

If you enable `CONFIG_SOF_DEBUG_STREAM_TEXT_MSG=y`, developers can insert rapid debug markers without setting up topology traces:

```c
#include <user/debug_stream_text_msg.h>

void my_function() {
ds_msg("Reached tricky initialization state! Value: %d", some_val);
}
```

On the host machine, you extract this continuous output stream by running the provided SOF tooling:

```bash
python3 tools/debug_stream/debug_stream.py
```
**`src/debug/gdb/readme.md`** (65 additions)
# GDB Remote Debugging Stub

The Sound Open Firmware (SOF) project carries a GNU Debugger (GDB) stub directly integrated with the framework's exception handlers. The stub translates commands sent by a GDB client (running on the host Linux machine) into architecture-specific operations.

## Feature Overview

Instead of relying entirely on complex JTAG setups, developers can use this stub to dynamically introspect panic states, stack traces, and variable states during firmware execution, particularly inside isolated SoC DSP cores.

When the firmware faults or hits a defined breakpoint, the exception vector routes control into this stub. It then waits for GDB Remote Protocol packet streams (ASCII formatted over the SOF mailbox/shared memory window). The Host reads these mailbox slots and pushes/pulls responses to its active GNU Debugger session.

## Architecture

Data moves between the Host GDB environment, the physical mailboxes bounding the DSP domain, the DSP firmware's built-in stub, and the active exception state.

```mermaid
sequenceDiagram
participant GdbSession as Host GDB Client
participant IPC as SOF Driver / ALSA
participant Stub as Firmware GDB Stub (gdb_parser)
participant HW as DSP Context Registers

HW-->>Stub: Hard Fault / Breakpoint Hit
activate Stub

Note over Stub: Stores fault context (sregs/aregs)

Stub-->>IPC: Write Mailbox: Breakpoint Notification

GdbSession->>IPC: Send Packet (e.g., $g#67 to Read Regs)
IPC->>Stub: Passes 'g' string

Stub->>HW: Reads requested register values
HW-->>Stub: Values

Stub->>Stub: mem_to_hex() formatting
Stub-->>GdbSession: Returns hex payload via Mailbox

GdbSession->>IPC: Send Packet ($c#63 to Continue)
IPC->>Stub: Process 'c'
    Stub->>HW: Restores Context, Resumes Execution
deactivate Stub
```

## How to Enable

A basic GDB debugging configuration is exposed via Kconfig and must be explicitly bound:

* `CONFIG_GDB_DEBUG`: Needs to be toggled `=y` to compile `src/debug/gdb/gdb.c` into the main application.

Additionally, the corresponding exception vectors must be rewired. In Zephyr OS based builds (which currently drive native architectures), fatal exception handling must be configured to pass the register dump to `gdb_handle_exception()`.

## Usage and Protocols

The protocol adheres precisely to the standard GDB remote serial specification. Each string packet expects the format:

`$<packet-data>#<check-sum>`

Supported Command Handlers inside the Stub:

* `g` (Read all registers) / `G` (Write all registers)
* `m` (Read memory) / `M` (Write memory)
* `p` (Read specific register) / `P` (Write specific register)
* `v` (Query architecture/support details like `vCont`)
* `c` / `s` (Continue execution / Single-step)
* `z` / `Z` (Insert/Remove breakpoints)
**`src/debug/telemetry/readme.md`** (48 additions)
# Telemetry and Performance Measurements

The SOF Telemetry subsystem is a suite of built-in diagnostics measuring code execution efficiencies, cycle overheads, and hardware I/O throughput.

## Feature Overview

Latency and real-time execution bounds are critical in DSP firmware. The telemetry feature provides mechanisms to monitor these bounds accurately without intrusive breakpoints or slowing down the pipeline too aggressively.

Capabilities include:

1. **Component Performance Tracking**: For every instantiated component in the graph, it measures the pure execution time bounds (min/max/average) of that component's `comp_copy()` routines.
2. **I/O Throughput Tracking**: Measures hardware bus speeds or message handling by counting bytes, state changes, or tokens across distinct interfaces: IPC, DMIC, I2S, HD/A, I2C, SPI, etc.
3. **Zephyr Systick Measurement**: Specifically tracks the overall scheduler overhead bounding RTOS ticks.

Measurements are batched into a ringbuffer locally, then synced across mapped ADSP memory windows into user space, limiting the impact on the active instruction cache.

## Architecture

The architecture bridges the component layer (like pure IPC or Audio Component wrappers) directly into independent statistics accumulators.

```mermaid
graph TD
subgraph DSP Environment
Comp[Audio Component X] --> |"comp_copy() Execution"| Telemetry[perf_measure_execute]
HW[I2S / DMIC HW Driver] --> |"State Change Count"| Telemetry

Telemetry --> RingBuffer[Statistics Ringbuffer]
RingBuffer --> Sync["Memory Window 3 (perf_data_sync)"]
end

subgraph Host Userspace
Dev[sof-logger / IPC Tooling] --> |Reads/Queries| Sync
end
```

## How to Enable

Telemetry depends strictly on NOT being built inside a host-userspace environment simulator (`depends on !SOF_USERSPACE_LL`). Ensure your target is a physical or emulated DSP target.

Settings to configure in `Kconfig`:

* `CONFIG_SOF_TELEMETRY=y` : Enable the overarching telemetry interfaces, giving you systick and basic task metrics over Memory Window 2 interfaces.
* `CONFIG_SOF_TELEMETRY_PERFORMANCE_MEASUREMENTS=y` : Adds granular tracking to audio components (creating the explicit `telemetry.c` ringbuffer maps via Memory Window 3 slots). Be aware that only a specific configured amount (`PERFORMANCE_DATA_ENTRIES_COUNT`) can be actively tracked due to RAM constraints.
* `CONFIG_SOF_TELEMETRY_IO_PERFORMANCE_MEASUREMENTS=y` : Instructs hardware and communication buses to start pumping data into the metrics collector.

## Extracting Data

You can fetch these metrics via `sof-logger` or standard IPC interrogation tools that support polling the corresponding debug window slots mapped for your particular platform's `ADSP_MW`.