Based on #580.
For example:
Our implementation
On the client, when `<context>.lazy == True`:

- messages are added to a queue (`<context>._sendq`) instead of being sent
- when `<context>.sync()` is called, both queues are sent to the engines (one queue per engine, actually), and a real `recv` is blocked upon.

On the engines, when a `'process_message_queue'` message is received (containing the queues):

- `sendq` is processed one message at a time, and the placeholder values from the `recvq` are used to feed values forward into the engine-side computation.

On the client:
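The queue-and-placeholder flow above can be sketched as a toy round trip. `Placeholder`, `Engine`, and `Client` here are illustrative stand-ins for the client/engine split, not distarray's actual classes:

```python
import math

class Placeholder:
    """Client-side stand-in for a value that doesn't exist yet."""
    def __init__(self, key):
        self.key = key

class Engine:
    """One engine; resolves placeholders against its local namespace."""
    def __init__(self):
        self.namespace = {}

    def process_message_queue(self, sendq):
        # Process sendq one message at a time; placeholder arguments
        # are fed forward from values computed earlier in the queue.
        for key, func, args in sendq:
            resolved = [self.namespace[a.key] if isinstance(a, Placeholder) else a
                        for a in args]
            self.namespace[key] = func(*resolved)

class Client:
    def __init__(self, engine):
        self.engine = engine
        self.lazy = True
        self._sendq = []
        self._counter = 0

    def call(self, func, *args):
        # In lazy mode, queue the message and hand back a placeholder.
        self._counter += 1
        key = "v%d" % self._counter
        self._sendq.append((key, func, args))
        return Placeholder(key)

    def sync(self, result):
        # Ship the whole queue to the engine, then do a real recv
        # (here just a dict lookup) for the requested result.
        self.engine.process_message_queue(self._sendq)
        self._sendq = []
        return self.engine.namespace[result.key]

client = Client(Engine())
a = client.call(lambda: [0.0, 1.0, 2.0])                   # queued, returns a placeholder
b = client.call(lambda xs: [math.tanh(x) for x in xs], a)  # consumes the placeholder
out = client.sync(b)                                       # both ops run engine-side now
```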
A note on my names
There's a `Context` attribute called `lazy`, and a `Context` method called `lazy_eval()` that is a context manager (decorated by `contextlib.contextmanager`) that sets and unsets `lazy` under the hood. If you have better ideas for those names, let me know.

Benchmark
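A minimal sketch of that pair, assuming only what's stated above (a `lazy` flag plus a `contextlib.contextmanager`-decorated `lazy_eval()`); the toy `Context` stands in for distarray's real one:

```python
from contextlib import contextmanager

class Context:
    def __init__(self):
        self.lazy = False   # eager by default

    @contextmanager
    def lazy_eval(self):
        # Set lazy on entry and unset it on exit, even if the body
        # raises, so the context is never left stuck in lazy mode.
        self.lazy = True
        try:
            yield self
        finally:
            self.lazy = False

context = Context()
with context.lazy_eval():
    assert context.lazy      # operations here would be queued
assert not context.lazy      # back to eager after the block
```

The `try`/`finally` around the `yield` is the important part: an exception inside the `with` block still restores eager mode.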
I also added a simple benchmark in `examples/lazy_eval`. I time a loop that computes `tanh` on a DistArray for a settable number of iterations, in both lazy mode and the default eager mode. Lazy mode seems to beat eager, but not as dramatically as I would have expected. I should probably think through the benchmark more.
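For intuition about where the lazy win should come from, here is a message-count sketch; the `CountingChannel` is a stand-in for the client-to-engine transport, whose real per-message cost is a network round trip:

```python
class CountingChannel:
    """Counts how many times the client talks to the engines."""
    def __init__(self):
        self.round_trips = 0

    def send(self, batch):
        self.round_trips += 1

def eager(niter, chan):
    # Eager mode: one client->engine message per operation.
    for _ in range(niter):
        chan.send(["tanh"])

def lazy(niter, chan):
    # Lazy mode: queue client-side, ship one batch on sync().
    sendq = []
    for _ in range(niter):
        sendq.append("tanh")
    chan.send(sendq)

e, l = CountingChannel(), CountingChannel()
eager(100, e)
lazy(100, l)
print(e.round_trips, l.round_trips)  # -> 100 1
```

If per-message overhead is small relative to the `tanh` work itself, the measured gap will be modest, which may explain why the benchmark's lazy-mode advantage is less dramatic than expected.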