ipc_service: static_vrings: Move to one WQ per instance #43806
Conversation
Force-pushed from 11aa16f to 54312af
This fixes an issue where we dropped the send buffer when ipc_service_send() timed out waiting for a shared memory buffer. We now wait in the IPC RX endpoint, but this shouldn't be a problem for other IPC users, as PR zephyrproject-rtos#43806 introduces separate workqueues per instance, with configurable priorities. This also includes small changes regarding the IPC service return value. Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
@hubertmis @arnopo @mbolivar-nordic looking at #44016, the problem with this PR is that we cannot set negative values in DT, so we cannot assign a priority in the cooperative (negative) range. So either we find a different way to define the priority value, or we just go with the solution in #44016 and close this one down. @mbolivar-nordic am I correct that we cannot set negative values in the DT?
I don't like the #44016 solution. It is acceptable as a temporary workaround, but we need to have a way for application developers to define threads with higher priority than VirtQueue notification work. We cannot assume that this workqueue has the highest priority in all the systems using Zephyr.
OK, one possible solution is to have something like the following:
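The inline snippet above did not survive the page export. As a hypothetical sketch of the kind of two-cell encoding being proposed (node, property, and macro names here are illustrative, not the final binding), the idea is to split the priority into a non-negative level plus a type cell so that cooperative priorities never need a negative number in DT:

```dts
ipc0: ipc {
	compatible = "zephyr,ipc-openamp-static-vrings";
	/* level 1, cooperative; PRIO_COOP / PRIO_PREEMPT would be
	 * dt-binding macros, and the driver would translate the pair
	 * into a signed Zephyr thread priority at build time */
	zephyr,priority = <1 PRIO_COOP>;
};
```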
However, the temporary fix is probably needed at the moment to fix problems before nrfconnect/sdk-nrf#7050 is merged. It is clearly a workaround and the real solution should come with this PR, which should overwrite the changes introduced by #44016.
Yeah, this is what I suggested, but using some defines instead.
Oops, sorry, I read your comment too fast and misunderstood it...
@@ -28,3 +28,8 @@ properties:
   mbox-names:
     description: MBOX channel names (must be called "tx" and "rx")
     required: true

   priority:
     description: WQ priority for the instance
Am I missing a wider context where this is documented? If not, this really should get some more description. What are the min/max values? Are higher values higher priority, or are lower values higher priority? etc.
You are right, I'll extend the description.
This priority value is translated into the priority of the WQ thread managing the incoming data on the IPC instance. So it really is a thread priority as usually defined in Zephyr, see https://docs.zephyrproject.org/latest/reference/kernel/threads/index.html#thread-priorities
The problem is that (AFAICT) we cannot use negative values here when we want to assign a high (cooperative) priority to the WQ, so the idea is to resort to something fancier, like the one proposed in #43806 (comment)
OK, thanks for explaining!
If this is a zephyr priority, then it's not really hardware configuration and so ordinarily I would say it shouldn't go into devicetree. But I understand the problem here: you don't really have anywhere else to configure it.
Can we compromise and call this zephyr,wq-priority to make it super obvious what's going on? I know the compatible is already zephyr-specific, but I'm hoping you get why this is a bit unusual for DT.
Anyway, as far as negative numbers go, you can use negative numbers in devicetree with some restrictions. Please read https://elinux.org/Device_Tree_Mysteries#Signed_Property_Values for details. Do you think the 2's complement restriction is still workable for your use case? If so, it might be better to do it that way.
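To illustrate the restriction the linked page describes: dtc only accepts a negative number inside parentheses, and it stores the cell as its 32-bit two's complement, so the consuming code must cast the raw u32 back to a signed integer. A minimal sketch (node and property names are illustrative):

```dts
ipc0: ipc {
	compatible = "zephyr,ipc-openamp-static-vrings";
	/* dtc stores this cell as 0xFFFFFFFE; the driver has to cast
	 * the u32 it reads back to int32_t to recover -2 */
	priority = <(-2)>;
};
```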
> Can we compromise and call this zephyr,wq-priority to make it super obvious what's going on? I know the compatible is already zephyr-specific, but I'm hoping you get why this is a bit unusual for DT.
If you think this is a dumb idea, go ahead and say so; I won't be offended.
Not at all. Actually that makes even more sense, because the priority really refers to the thread priority, which is indeed something Zephyr-specific.
Force-pushed from 54312af to f723862
Force-pushed from f723862 to 2c3362b
Looks good!
Force-pushed from 2c3362b to f5d913f
Force-pushed from f5d913f to 1eed291
Instead of having one single WQ per backend, move to one WQ per instance and add a new "zephyr,priority" property in the DT to set the WQ priority of the instance. Fix the sample as well. Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Force-pushed from 1eed291 to e1f38b4
@carlescufi please re-approve