Editorial: Queue a task to resolve/reject promise or when fire an event. #1755
base: main
Conversation
In #1744, I was asked to introduce the parallel queue. I am revisiting it to see whether that also needs to be applied here.

I started to wonder whether we need [=in parallel=] at all if almost all sub-steps under [=in parallel=] are enqueued. I am not familiar enough with how specs are written to understand parallel jobs.
This diff looks locally reasonable to me. However, I am not caught up on the discussions about parallel queues vs. using the generic "in parallel", so I cannot say which is best here. Maybe merging this first, and then doing some overall work on using parallel queues later, is the right approach. To try to answer the more generic questions: ideally, specifications are written to match implementations.

How does this apply to the service worker spec, and to this PR? Well, I think the main thing is to identify from which thread you intend "service worker registration" objects to be accessed. I get the impression it should be in parallel, not on an event loop thread. So that's why it's worth keeping something parallel-ish here.

Note that in this PR there is some thread crossing, where |registration| is implicitly passed from "in parallel" (where it was created in step 8.1) to the event loop thread (in steps 8.2.1 and 8.2.2). This is a bit sketchy, but mostly OK. In practice, what's probably happening here is a copy and snapshot: when "get the service worker registration object" consults the registration's installing worker in step 2.6, it's probably not doing some sort of cross-thread lookup at the actual registration object, but instead looking at some key that was passed across threads.

A more fully rigorous version of this whole setup would have explicitly obvious primitive keys (like string UUIDs, or something) that let us correspond between browser-process "service worker"/"service worker registration" objects and event-loop-side objects.

If I were to suggest the most useful improvements to the spec's cross-thread hygiene, here would be my priority list:
I think it is totally fine if only item 1 on this list gets done. Items like 2-4 can just be things you keep in mind when you write new spec text.
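The distinction discussed above can be sketched concretely. Below is a minimal, hypothetical model of a spec-style parallel queue (the `ParallelQueue` name and its API are my own, not from the spec): enqueued step-batches still run off the main thread, but strictly one at a time in FIFO order, so they can never interleave with each other the way generic "in parallel" steps may.

```python
# Illustrative sketch only; names are not from the spec.
import threading
import queue

class ParallelQueue:
    """A single worker thread drains a FIFO queue of step-batches."""

    def __init__(self):
        self._steps = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def _drain(self):
        while True:
            steps = self._steps.get()
            if steps is None:  # sentinel: shut down
                return
            steps()

    def enqueue(self, steps):
        """Enqueue a batch of steps; it runs only after all earlier batches."""
        self._steps.put(steps)

    def close(self):
        self._steps.put(None)
        self._worker.join()

pq = ParallelQueue()
order = []
for i in range(5):
    pq.enqueue(lambda i=i: order.append(i))
pq.close()
print(order)  # [0, 1, 2, 3, 4]
```

Because every batch goes through the same worker thread, no locking is needed inside the batches themselves; ordering falls out of the queue.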
1. Resolve |promise| with undefined.
1. [=Queue a task=] on |promise|'s [=relevant settings object=]'s [=responsible event loop=], using the [=DOM manipulation task source=], to run the following steps:
    1. If |registration|'s [=active worker=] is null, [=reject=] |promise| with an "{{InvalidStateError}}" {{DOMException}}, and abort these steps.
    1. Set |registration|'s [=navigation preload enabled flag=].
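As a rough illustration of the pattern in this diff, here is a hedged sketch using Python's asyncio: the algorithm's steps run "in parallel" (modeled as a worker thread), but the promise (modeled as a Future) is only settled from a task queued back onto the event loop. The function name and the dict-based |registration| are assumptions for illustration, not spec text.

```python
import asyncio
import threading

def enable_navigation_preload(registration, loop):
    future = loop.create_future()  # stands in for |promise|

    def in_parallel():
        # ... other parallel steps would run here, off the event loop ...
        def queued_task():
            # Runs on the event loop, like "queue a task ... to run:"
            if registration["active_worker"] is None:
                future.set_exception(ValueError("InvalidStateError"))
                return
            registration["navigation_preload_enabled"] = True
            future.set_result(None)
        # call_soon_threadsafe is the moral equivalent of "queue a task"
        loop.call_soon_threadsafe(queued_task)

    threading.Thread(target=in_parallel).start()
    return future

async def main():
    loop = asyncio.get_running_loop()
    registration = {"active_worker": object()}
    await enable_navigation_preload(registration, loop)
    return registration

result = asyncio.run(main())
print(result["navigation_preload_enabled"])  # True
```

The key property this models: the flag is both written (here) and would be read (in Handle Fetch) only from tasks on the same loop, so no data race is possible on it.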
If this is to protect [=navigation preload enabled flag=] from being updated in parallel, I guess the code that reads [=navigation preload enabled flag=] in Handle Fetch also needs to be executed via the queue?
In the Handle Fetch algorithm, I see [=navigation preload enabled flag=] being read for a check, and I also don't see those steps running [=in parallel=].
docs/index.bs
Outdated
1. If |client|'s [=environment/execution ready flag=] is set, then invoke [=Resolve Get Client Promise=] with |client| and |promise|, and abort these steps.
1. Resolve |promise| with undefined.
1. If |client|'s [=environment/execution ready flag=] is set, [=queue a task=] on |promise|'s [=relevant settings object=]'s [=responsible event loop=], using the [=DOM manipulation task source=], to invoke [=Resolve Get Client Promise=] with |client| and |promise|, and abort these steps.
1. [=Queue a task=] to resolve |promise| with undefined.
Why not queue at line 1391, and run the invocation and the resolve in its substeps?
Also, is it guaranteed that the client state won't change during the parallel execution?
For the Client API, it might affect not only the service worker but also clients. Do we need a parallel queue that covers both, instead of the event loop for the ServiceWorker itself?
Updated the queue to resolve in substeps.
@yoshisatoyanagisawa Can you please elaborate on your third comment in this thread?
For Client API, it might affect not only service worker but also clients. Do we need a parallel queue that covers both instead of the event loop for ServiceWorker itself?
@@ -2261,10 +2267,11 @@ spec: storage; urlPrefix: https://storage.spec.whatwg.org/
The <dfn method for="CacheStorage"><code>match(|request|, |options|)</code></dfn> method steps are:
1. If |options|["{{MultiCacheQueryOptions/cacheName}}"] [=map/exists=], then:
    1. Return [=a new promise=] |promise| and run the following substeps [=in parallel=]:
    1. Let |promise| be [=a new promise=].
    1. Run the following substeps [=in parallel=]:
Considering the possible modification of the map, I guess we want to execute the following steps inside a parallel queue.
It might be good to have a dedicated parallel queue for the Cache API.
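One way to picture the dedicated-parallel-queue suggestion is to funnel every operation that touches the name-to-cache map through a single worker thread, so a concurrent match() can never observe the map mid-modification. This is a hypothetical sketch; all names and the callback-based API are illustrative, not from the spec.

```python
import threading
import queue

class CacheStorageBackend:
    def __init__(self):
        self._caches = {}           # the name-to-cache map
        self._ops = queue.Queue()   # the dedicated parallel queue
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def _drain(self):
        while True:
            op = self._ops.get()
            if op is None:  # sentinel: shut down
                return
            op()

    def open(self, name, done):
        # Mutation of the map happens only on the queue's thread.
        self._ops.put(lambda: done(self._caches.setdefault(name, {})))

    def match(self, name, request, done):
        # Reads also go through the queue, so they are ordered
        # after any earlier mutations.
        def op():
            cache = self._caches.get(name, {})
            done(cache.get(request))
        self._ops.put(op)

    def close(self):
        self._ops.put(None)
        self._worker.join()

backend = CacheStorageBackend()
results = []
backend.open("v1", lambda cache: cache.update({"/a": "response-a"}))
backend.match("v1", "/a", results.append)
backend.close()
print(results)  # ['response-a']
```

Since open() is enqueued before match(), the match is guaranteed to see the populated cache; without the shared queue, the two could race on `_caches`.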
Can you please also elaborate on your suggestion?
This is for addressing #1740.