
ipc_service: static_vrings: Move to one WQ per instance #43806

Merged
merged 1 commit into zephyrproject-rtos:main from ipc_service_wq on Mar 28, 2022

Conversation

carlocaione
Collaborator

Instead of having one single WQ per backend, move to one WQ per
instance, and add a new "priority" property in the DT to set the
WQ priority of the instance. Fix the sample as well.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
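For context, a minimal sketch of what a per-instance workqueue driven by such a DT property might look like (the node label ipc0, the stack size, and the function name are illustrative assumptions, not code from this PR):

#include <zephyr/kernel.h>
#include <zephyr/devicetree.h>

/* Hypothetical DT fragment for one instance:
 *
 *     ipc0: ipc0 {
 *             compatible = "zephyr,ipc-openamp-static-vrings";
 *             priority = <0>;
 *     };
 */
static struct k_work_q ipc0_wq;
static K_KERNEL_STACK_DEFINE(ipc0_stack, 1024);

void ipc0_wq_start(void)
{
        /* One dedicated WQ for this instance, started at the
         * priority taken from the instance's DT node. */
        k_work_queue_start(&ipc0_wq, ipc0_stack,
                           K_KERNEL_STACK_SIZEOF(ipc0_stack),
                           DT_PROP(DT_NODELABEL(ipc0), priority), NULL);
}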

hubertmis previously approved these changes Mar 15, 2022
arnopo previously approved these changes Mar 16, 2022
@zephyrbot zephyrbot added the area: IPC Inter-Process Communication label Mar 16, 2022
jori-nordic added a commit to jori-nordic/zephyr that referenced this pull request Mar 21, 2022
This fixes an issue where we dropped the send buffer when
ipc_service_send() timed out waiting for a shared memory buffer.

We now wait in the IPC RX endpoint, but this shouldn't be a problem for
other IPC users, as PR zephyrproject-rtos#43806 introduces separate workqueues
per instance, with configurable priorities.

This also includes small changes regarding the IPC service return value.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
@carlocaione
Collaborator Author

carlocaione commented Mar 21, 2022

@hubertmis @arnopo @mbolivar-nordic looking at #44016, the problem with this PR is that we cannot set negative values in the DT, so we cannot set a priority higher than 0 (i.e. we cannot use cooperative threads).

So either we find a different way to define the priority value or we just go with the solution in #44016 and close this one down.

@mbolivar-nordic am I correct that we cannot set negative values in the DT?

@hubertmis
Member

I don't like the #44016 solution. It is acceptable as a temporary workaround, but we need a way for application developers to define threads with a higher priority than the VirtQueue notification work. We cannot assume that this workqueue has the highest priority in all systems using Zephyr.

@carlocaione
Collaborator Author

carlocaione commented Mar 21, 2022

OK, one possible solution is to have something like this (for example):

priority = <1 PRIO_PREEMPT>; # == 1
...
priority = <2 PRIO_COOP>; # == -2

@ahasztag
Contributor

> I don't like the #44016 solution. It is acceptable as a temporary workaround, but we need a way for application developers to define threads with a higher priority than the VirtQueue notification work. We cannot assume that this workqueue has the highest priority in all systems using Zephyr.

However, the temporary fix is probably needed at the moment, to fix problems before nrfconnect/sdk-nrf#7050 is merged. It is clearly a workaround, and the real solution should come with this PR, which should supersede the changes introduced by #44016.

@arnopo
Collaborator

arnopo commented Mar 21, 2022

> OK, one possible solution is to have something like this (for example):
>
> priority = <1 PRIO_PREEMPT>; # == 1
> ...
> priority = <2 PRIO_COOP>; # == -2

The priority DT field is not generic, right?
What about adding a second parameter?

priority = <1 1>;

  • First parameter is the prio type:
    • 0: preempt
    • 1: coop
  • Second parameter is the priority level

@carlocaione
Collaborator Author

> The priority DT field is not generic, right?
> What about adding a second parameter?
>
> priority = <1 1>;
>
>   • First parameter is the prio type:
>     • 0: preempt
>     • 1: coop
>   • Second parameter is the priority level

Yeah, this is what I suggested, but using some defines instead of 0 and 1. I'll try to prepare a patch.
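A sketch of what such defines might look like in a dt-bindings header (the header path and macro names are illustrative assumptions; the merged patch may differ):

/* Hypothetical include/zephyr/dt-bindings/ipc_service/static_vrings.h */
#define PRIO_PREEMPT 0 /* type cell: preemptive priority */
#define PRIO_COOP    1 /* type cell: cooperative priority */

With those in place, an instance node could write, e.g., priority = <1 PRIO_COOP>; instead of raw 0s and 1s.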

@arnopo
Collaborator

arnopo commented Mar 21, 2022

> Yeah, this is what I suggested, but using some defines instead of 0 and 1. I'll try to prepare a patch.

Oops, sorry, I read your comment too fast and misunderstood it...

@@ -28,3 +28,8 @@ properties:
   mbox-names:
     description: MBOX channel names (must be called "tx" and "rx")
     required: true
+
+  priority:
+    description: WQ priority for the instance
Contributor

Am I missing a wider context where this is documented? If not, this really should get some more description. What are the min/max values? Are higher values higher priority, or are lower values higher priority? etc.

Collaborator Author

You are right, I'll extend the description.

This priority value is translated into the priority of the WQ thread managing the incoming data on the IPC instance. So it really is a thread priority as usually defined in Zephyr, see https://docs.zephyrproject.org/latest/reference/kernel/threads/index.html#thread-priorities

The problem is that (AFAICT) we cannot use negative values here when we want to assign a high (cooperative) priority to the WQ, so the idea is to resort to something fancier, like the scheme proposed in #43806 (comment)
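A minimal sketch of the translation step being discussed, assuming a two-cell <level type> encoding (the helper name wq_prio_from_dt is an illustrative assumption):

#include <zephyr/kernel.h>

/* Convert a DT <level type> pair into a Zephyr thread priority.
 * Cooperative priorities are negative in Zephyr; the K_PRIO_COOP()
 * and K_PRIO_PREEMPT() helpers perform the conversion. */
static int wq_prio_from_dt(int level, int type)
{
        return (type == 1 /* coop */) ? K_PRIO_COOP(level)
                                      : K_PRIO_PREEMPT(level);
}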

Contributor

OK, thanks for explaining!

If this is a zephyr priority, then it's not really hardware configuration and so ordinarily I would say it shouldn't go into devicetree. But I understand the problem here: you don't really have anywhere else to configure it.

Can we compromise and call this zephyr,wq-priority to make it super obvious what's going on? I know the compatible is already zephyr-specific, but I'm hoping you get why this is a bit unusual for DT.

Anyway, as far as negative numbers go, you can use negative numbers in devicetree with some restrictions. Please read https://elinux.org/Device_Tree_Mysteries#Signed_Property_Values for details. Do you think the 2's complement restriction is still workable for your use case? If so, it might be better to do it that way.
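For reference, the negative-value approach from that article would look roughly like this (the node label ipc0 is an illustrative assumption):

#include <stdint.h>
#include <zephyr/devicetree.h>

/* In the .dts, negative cells need parentheses:
 *
 *     priority = <(-2)>;
 *
 * The cell is stored as a 32-bit two's complement bit pattern, so
 * the consumer casts it back to a signed type when reading it. */
int prio = (int32_t)DT_PROP(DT_NODELABEL(ipc0), priority);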

Contributor

> Can we compromise and call this zephyr,wq-priority to make it super obvious what's going on? I know the compatible is already zephyr-specific, but I'm hoping you get why this is a bit unusual for DT.

If you think this is a dumb idea, go ahead and say so; I won't be offended.

Collaborator Author

Not at all. Actually, that makes even more sense, because the priority really refers to the thread priority, which is indeed something Zephyr-specific.

@carlocaione carlocaione dismissed stale reviews from arnopo and hubertmis via f723862 March 23, 2022 10:15
@github-actions github-actions bot added the area: API Changes to public APIs label Mar 23, 2022
carlescufi pushed a commit that referenced this pull request Mar 23, 2022
hubertmis previously approved these changes Mar 24, 2022

Member

@hubertmis hubertmis left a comment

Looks good!

carlescufi previously approved these changes Mar 24, 2022
Instead of having one single WQ per backend, move to one WQ per
instance, and add a new "zephyr,priority" property in the DT to
set the WQ priority of the instance. Fix the sample as well.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
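So the final commit combines the two discussions: a zephyr-prefixed property carrying the two-cell scheme. A hypothetical instance node and read-back (values and node label are illustrative assumptions):

#include <zephyr/devicetree.h>

/* Hypothetical instance node using the merged property shape:
 *
 *     ipc0: ipc0 {
 *             compatible = "zephyr,ipc-openamp-static-vrings";
 *             zephyr,priority = <0 PRIO_COOP>;
 *     };
 *
 * A two-cell array property is read back cell by cell: */
int level = DT_PROP_BY_IDX(DT_NODELABEL(ipc0), zephyr_priority, 0);
int type  = DT_PROP_BY_IDX(DT_NODELABEL(ipc0), zephyr_priority, 1);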
@mbolivar-nordic
Contributor

@carlescufi please re-approve

@carlescufi carlescufi merged commit 92d8329 into zephyrproject-rtos:main Mar 28, 2022
@carlocaione carlocaione deleted the ipc_service_wq branch March 28, 2022 11:08
jori-nordic added a commit to jori-nordic/zephyr that referenced this pull request Mar 29, 2022
jori-nordic added a commit to jori-nordic/zephyr that referenced this pull request Apr 20, 2022
Labels
area: API · area: Devicetree · area: Devicetree Binding · area: IPC · area: Samples