
Lazy evaluation#582

Open
bgrant wants to merge 39 commits into master from feature/lazy-evaluation

Conversation

Contributor

@bgrant bgrant commented Aug 25, 2014

Based on #580.

For example:

        with context.lazy_eval():
            a = context.zeros((52, 62))
            b = context.ones((52, 62))
            c = context.ones((52, 62)) + 1
            d = (2*a + (3*b + 4*c)) / 2
            e = globalapi.negative(d * d)

Our implementation

On the client, when <context>.lazy == True:

  • Sends are intercepted and queued (in <context>._sendq)
  • Recvs return immediately, queueing a lazy placeholder object inside a returned proxy object
  • When <context>.sync() is called, both queues are sent to the engines (one queue per engine, in fact), and a real recv blocks until the results come back.
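The client-side flow above can be sketched as follows. This is a minimal toy, not the actual distarray implementation: the internals of Context, LazyPlaceholder, and Proxy, and the toy_engine function standing in for a real engine round-trip, are all invented for illustration (only the names lazy, _sendq, send, recv, and sync come from the description above).

```python
import itertools

class LazyPlaceholder:
    """Stand-in for a result that has not been computed yet (hypothetical name)."""
    _ids = itertools.count()

    def __init__(self):
        self.id = next(LazyPlaceholder._ids)
        self.value = None  # filled in by sync()

class Proxy:
    """Client-side handle; wraps a placeholder until sync() fills it in."""
    def __init__(self, target):
        self.target = target

class Context:
    def __init__(self, engine):
        self.lazy = False
        self._engine = engine  # callable standing in for an engine round-trip
        self._sendq = []       # intercepted sends
        self._recvq = []       # placeholders for intercepted recvs

    def send(self, msg):
        if self.lazy:
            self._sendq.append(msg)    # intercept and queue instead of sending
        else:
            self._engine([msg])        # eager path: send right away

    def recv(self):
        assert self.lazy, "only the lazy path is sketched here"
        ph = LazyPlaceholder()         # return immediately ...
        self._recvq.append(ph)
        return Proxy(ph)               # ... with the placeholder inside a proxy

    def sync(self):
        # Ship the queued sends in one message and block on a real recv;
        # here the "engine" is just a local function call.
        results = self._engine(self._sendq)
        for ph, value in zip(self._recvq, results):
            ph.value = value           # fill in the reserved proxies
        self._sendq, self._recvq = [], []

def toy_engine(sendq):
    """Evaluates ('add', a, b) messages, one result per message."""
    return [a + b for (_op, a, b) in sendq]
```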

On the engines, when a 'process_message_queue' message is received (containing the queues):

  • Each message in the sendq is processed one at a time, and the placeholder values from the recvq are used to feed values forward into the engine-side computation
  • Client-sends (return values) are queued and sent back as a single message once the whole queue has been processed.
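A sketch of the engine-side processing described above, with the message format made up for illustration: each queued send is a (result_id, op, args) tuple, and a ('ref', id) argument refers to the result of an earlier message, which is how values feed forward through the queue.

```python
def process_message_queue(sendq, placeholder_ids):
    """Process a batched message queue; names and message shapes are
    hypothetical, not the actual distarray wire format."""
    results = {}  # result id -> computed value

    def resolve(arg):
        # Feed values forward: a ('ref', id) arg names an earlier result.
        if isinstance(arg, tuple) and arg[0] == 'ref':
            return results[arg[1]]
        return arg

    # Process each message in the sendq, one at a time.
    for result_id, op, args in sendq:
        a, b = (resolve(arg) for arg in args)
        if op == 'add':
            results[result_id] = a + b
        elif op == 'mul':
            results[result_id] = a * b
        else:
            raise ValueError('unknown op: %r' % (op,))

    # Queue the client-sends and return them as one message.
    return [results[pid] for pid in placeholder_ids]
```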

On the client:

  • The client iterates through this return queue, and the lazy placeholders inside the originally reserved proxies are replaced by the real return values.
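The in-place replacement works because the proxies handed out at recv() time still alias the queued placeholder objects. A minimal sketch (class and function names are hypothetical):

```python
class Placeholder:
    """Queued stand-in for a value that has not arrived yet."""
    def __init__(self):
        self.value = None

class Proxy:
    """Returned to the caller at recv() time; wraps a placeholder."""
    def __init__(self, target):
        self.target = target

def apply_return_queue(recvq, return_queue):
    # The originally reserved proxies reference these same placeholder
    # objects, so assigning each value updates every proxy in place.
    for placeholder, real_value in zip(recvq, return_queue):
        placeholder.value = real_value
```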

A note on my names

There's a Context attribute called lazy, and a Context method called lazy_eval(): a context manager (decorated with contextlib.contextmanager) that sets and unsets lazy under the hood. If you have better ideas for those names, let me know.
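The naming scheme described above might look roughly like this. This is a simplified sketch: the real Context does much more, and whether lazy_eval() calls sync() automatically on exit is an assumption here (the usage example at the top shows no explicit sync() call, which suggests it does).

```python
from contextlib import contextmanager

class Context:
    def __init__(self):
        self.lazy = False  # the boolean attribute named in the text

    def sync(self):
        pass  # would flush the send/recv queues to the engines here

    @contextmanager
    def lazy_eval(self):
        """Context manager that sets and unsets `lazy` under the hood."""
        self.lazy = True
        try:
            yield self
        finally:
            self.sync()      # assumption: flush queued work on exit
            self.lazy = False
```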

Benchmark

I also added a simple benchmark in examples/lazy_eval. I time a loop that computes tanh on a DistArray for a settable number of iterations, both in lazy mode and in the default eager mode. Lazy mode seems to beat eager, but the win is not as dramatic as I would have expected. I should probably think through the benchmark more.
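For intuition about why lazy mode should win at all, here is a toy model (not the actual benchmark): eager mode pays a client-to-engine round-trip on every operation, while lazy mode pays it once per sync(). The LATENCY value and the loop shape are invented purely for illustration.

```python
import math
import time

LATENCY = 0.0005  # pretend client<->engine round-trip, in seconds

def eager_loop(x, iterations):
    for _ in range(iterations):
        time.sleep(LATENCY)            # one round-trip per operation
        x = math.tanh(x)
    return x

def lazy_loop(x, iterations):
    queued = [math.tanh] * iterations  # ops accumulate in the send queue
    time.sleep(LATENCY)                # a single round-trip at sync()
    for op in queued:
        x = op(x)
    return x
```

Under this model the speedup is bounded by how much of the eager runtime is round-trip overhead rather than computation, which may be why the measured win is smaller than expected: tanh on a large DistArray spends most of its time computing, not communicating.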
