dax: use atomic flags to avoid race conditions #10580
checkupup wants to merge 1 commit into thesofproject:main
Conversation
Can one of the admins verify this patch?
Hello @johnylin76 @lgirdwood, let us continue our discussion on the race condition issue here.
```c
	DAX_FLAG_READ_AND_CLEAR,
};

static int32_t flag_process(struct dax_adapter_data *adapter_data,
```
I think Zephyr already has an implementation of atomic bit operations (e.g. `atomic_test_and_set_bit`). Did you consider it?
Yes, Zephyr provides a rich set of atomic bitwise operations, but for the detailed reasons, please refer to #10580 (comment)
```c
{
	switch (opt_mode) {
	case DAX_FLAG_READ:
		return atomic_read(&adapter_data->proc_flags[ffs(flag) - 1]);
```
You could just define indexes of individual bits instead of masks:
`#define DAX_DEVICE_MASK 0x4` -> `#define DAX_DEVICE_IDX 2`
This would eliminate `ffs`.
I prefer to preserve the bitwise-operation characteristics of masks, which allows me to:
- Retain the original code logic as much as possible
- Conveniently pass multiple masks within the same variable when needed, e.g. `masks = (DAX_DEVICE_MASK | DAX_PROFILE_MASK)`
- Easily modify the code to use `atomic_or` and `atomic_and` operations if we fully transition to the Zephyr environment in the future
```c
struct dax_adapter_data {
	struct sof_dax dax_ctx;
	atomic_t proc_flags[32];
```
@checkupup are you able to provide a little more context, as I can now see an array of data instead of a global flag. My understanding was that we would only have 2 instances and had to protect a single shared resource between both DP instances?

i.e. we can use https://docs.zephyrproject.org/latest/doxygen/html/group__atomic__apis.html#gaef275bd78e093e5b5177ef725d0f570a

```c
bool atomic_cas(atomic_t *target, atomic_val_t old_value, atomic_val_t new_value);

/* is resource mine? */
owner_id = atomic_get(atomic_var);
if (owner_id == MY_INSTANCE_ID_VALUE) {
	/* I can freely use the resource */
} else {
	/* I don't own it, try and get resource */
	is_resource_mine = atomic_cas(atomic_var, FREE_VALUE, MY_INSTANCE_ID_VALUE);
	if (!is_resource_mine)
		yield(); /* try again when we are next scheduled */
}
/* I can use resource at this point */
...

/* put resource */
is_resource_mine = atomic_cas(atomic_var, MY_INSTANCE_ID_VALUE, FREE_VALUE);
if (!is_resource_mine)
	/* should never fail - handle */
```

where each instance would have a unique id for the atomic variable.
This change does not involve modifications to shared instances; it aims to prevent a race condition on `dax_ctx->update_flags`.

For example, when executing `dax_ctx->update_flags |= DAX_PROCESSING_MASK;` in `sof_dax_process`, the DP thread may be preempted by the LL thread. The LL thread then executes the `sof_dax_reset` operation and evaluates `dax_ctx->update_flags & DAX_PROCESSING_MASK`. Because the preceding `|=` read-modify-write may not have completed its store yet, the check in the LL thread could be false, causing the instance resource to be released. When the DP thread resumes execution, it completes the assignment of `DAX_PROCESSING_MASK` and continues calling `dax_process` to handle data. Since the instance resource has already been released, this call results in undefined behavior. The same issue may also occur in `check_and_update_settings` and `check_and_update_state`.

As mentioned earlier, once `sof_dax_prepare`, `sof_dax_set_configuration`, and `sof_dax_reset` are moved to userspace, this race protection is no longer needed, since they run on the same thread as `sof_dax_process`. The main branch has already merged the userspace changes, but to align with the ptl-*-drop-stable branch, this change still needs to be merged first.
@lgirdwood @wjablon1 Zephyr supports numerous compelling operations such as …
In DP domain, `sof_dax_process` runs in a DP thread, other `sof_dax_*` functions run in a different LL thread. Use atomic flags instead of the original `dax_ctx->update_flags` to avoid potential race conditions when updating flags. Since `dax_inf.h` should not have any dependencies on SOF, we define the new atomic flags in `dax.c`. Signed-off-by: Jun Lai <jun.lai@dolby.com>
checkupup force-pushed from e8f2f72 to 4fa9389
The reason for the ask to move to atomic ops is that there is a cost around mutexes and spinlocks when built as a userspace module, i.e. there needs to be a context switch between user->kernel and back again. However, the atomic API is ISA non-privileged, so there is no cost between user/kernel and it's cheaper than a mutex. Please see below; it looks like we should add this into Zephyr.

```c
#include <zephyr/types.h>
#include <zephyr/toolchain.h>

/**
 * @brief Atomic bitwise OR for Xtensa
 *
 * @param target Pointer to the atomic variable
 * @param value The mask to OR with the target
 * @return The value of the target BEFORE the OR operation
 */
static ALWAYS_INLINE atomic_val_t xtensa_atomic_or(atomic_t *target, atomic_val_t value)
{
	atomic_val_t old_val, new_val;

	__asm__ volatile (
		"1: l32i   %0, %2, 0     \n\t" /* Load current value from target */
		"   or     %1, %0, %3    \n\t" /* new_val = old_val | value */
		"   wsr    %0, scompare1 \n\t" /* Set SCOMPARE1 for the comparison */
		"   s32c1i %1, %2, 0     \n\t" /* Conditionally store new_val if target == SCOMPARE1 */
		"   bne    %1, %0, 1b    \n\t" /* If store failed (value changed), retry loop */
		: "=&a" (old_val), "=&a" (new_val)
		: "a" (target), "a" (value)
		: "memory"
	);

	return old_val;
}
```
I would not expect the compatibility to be strictly as yours, due to the fact that we only consider DP mode available on Zephyr, not on XTOS; the scheduling strategies are different. IIUC, DP mode (the multi-threading scheme, to be precise) cannot be supported on XTOS, hence the scheduling strategies differ from the basis.

Take GoogleRTCAudioProc (a.k.a. AEC) as an example. The pain point is that it requires a 10 ms processing window internally, so DP mode is used on Zephyr platforms. As for the XTOS solution, we have to elongate the entire pipeline period to 10 ms to make AEC work. There is no synchronization problem since all module APIs are synchronous.
Thank you all. Based on the discussion above, below is an implementation example. Could you provide a quick review?

```c
struct dax_adapter_data {
	struct sof_dax dax_ctx;
	atomic_t proc_flags;
};

static int32_t flag_process(struct dax_adapter_data *adapter_data,
			    uint32_t flag,
			    enum dax_flag_opt_mode opt_mode)
{
#ifdef __ZEPHYR__
	int32_t bit = ffs(flag) - 1;

	switch (opt_mode) {
	case DAX_FLAG_READ:
		return atomic_test_bit(&adapter_data->proc_flags, bit);
	case DAX_FLAG_SET:
		atomic_set_bit(&adapter_data->proc_flags, bit);
		break;
	case DAX_FLAG_CLEAR:
		atomic_clear_bit(&adapter_data->proc_flags, bit);
		break;
	case DAX_FLAG_READ_AND_CLEAR:
		return atomic_test_and_clear_bit(&adapter_data->proc_flags, bit);
	default:
		break;
	}
#else
	/* Non-Zephyr builds run single-threaded (no DP mode), so there is no
	 * synchronization problem */
	int32_t old_flags = atomic_read(&adapter_data->proc_flags);

	switch (opt_mode) {
	case DAX_FLAG_READ:
		return (old_flags & flag) != 0;
	case DAX_FLAG_SET:
		atomic_set(&adapter_data->proc_flags, old_flags | flag);
		break;
	case DAX_FLAG_CLEAR:
		atomic_set(&adapter_data->proc_flags, old_flags & ~flag);
		break;
	case DAX_FLAG_READ_AND_CLEAR:
		atomic_set(&adapter_data->proc_flags, old_flags & ~flag);
		return (old_flags & flag) != 0;
	default:
		break;
	}
#endif
	return 0;
}
```
The new design looks good to me. Let's see if @lgirdwood and @wjablon1 have comments |
In DP domain, `sof_dax_process` runs in a DP thread, other `sof_dax_*` functions run in a different LL thread. Using atomic flags instead of the original `dax_ctx->update_flags` to avoid potential race conditions when updating flags.

Since `dax_inf.h` should not have any dependencies on SOF, we define the new atomic flags in `dax.c`