Future::join and const-eval
2021-01-16
Happy new year everyone! Today we’re trying something new: less of a blog
post, more of research notes. This is less of a “here’s something I’ve
concluded”, and more of: “here’s something I’m thinking about”. Today’s topic
is: “How can we add Future::{try_}join
and {try_}join!
to the stdlib in a
way that feels consistent?”
What does joining Futures do?
A Future in Rust is best thought of as a “value which eventually becomes
available”. It’s not specified when that value becomes available, so using
.await allows us to wait for it until it is.
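As a small sketch of what that looks like in practice (using std’s future::ready so the value is available right away):

use std::future;

// a minimal sketch: `future::ready` produces a value that is available
// immediately, and `.await` waits for it inside an async context
async fn example() -> u8 {
    let fut = future::ready(1u8);
    fut.await
}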
Sometimes we want to wait on more than one future at a time: after all, while
we’re waiting on one thing, we can do other things in the meantime. And one
way to do this is by calling join.
async-std
exposes a Future::join
method, and async-macros
exposes a
join!
macro. An example joining two futures:
// joining with the `join!` macro
let a = future::ready(1u8);
let b = future::ready(2u8);
assert_eq!(join!(a, b).await, (1, 2));

// joining with the `Future::join` method
let a = future::ready(1u8);
let b = future::ready(2u8);
assert_eq!(a.join(b).await, (1, 2));
However once we start joining more than two futures, the output types become different:
// `join!` keeps the output tuple flat
let a = future::ready(1u8);
let b = future::ready(2u8);
let c = future::ready(3u8);
assert_eq!(join!(a, b, c).await, (1, 2, 3));

// chained `Future::join` calls nest the output tuples
let a = future::ready(1u8);
let b = future::ready(2u8);
let c = future::ready(3u8);
assert_eq!(a.join(b).join(c).await, ((1, 2), 3));
As you can see, each invocation of Future::join
returns a tuple. But that
means that chaining calls to it starts to nest tuples, which becomes hard to
use. And it becomes more nested the more times you chain. Oh no!
In contrast, the join! macro dynamically grows the number of items returned
in the tuple. This is possible because macros can repeat over their arguments
and generate code, so inside the macro we just expand the output into a tuple
large enough to hold all of the outputs.
What does join! return?
The definition of async_macros::join! is fairly brief, so I’ll just share it
right here. The only detail missing is the definition of the MaybeDone type:
it’s a wrapper which can be awaited, and which stores the output of the
future once it completes. We wait for all instances of MaybeDone to complete,
and at the end we take all their values and return them from the future.
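As a rough sketch, here’s what such a MaybeDone wrapper could look like. This is simplified for illustration; the real async_macros::MaybeDone differs in its details:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// simplified sketch only; the real `async_macros::MaybeDone` differs in detail
enum MaybeDone<F: Future> {
    Pending(F),
    Done(F::Output),
    Taken,
}

impl<F: Future> MaybeDone<F> {
    fn new(fut: F) -> Self {
        MaybeDone::Pending(fut)
    }

    // once the wrapped future has completed, take its output
    fn take(self: Pin<&mut Self>) -> Option<F::Output> {
        // SAFETY: we only move out of the `Done` state, after the inner
        // future has already been dropped, so the pinning contract holds
        let this = unsafe { self.get_unchecked_mut() };
        match this {
            MaybeDone::Done(_) => {}
            _ => return None,
        }
        match std::mem::replace(this, MaybeDone::Taken) {
            MaybeDone::Done(out) => Some(out),
            _ => unreachable!(),
        }
    }
}

impl<F: Future> Future for MaybeDone<F> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        // SAFETY: the inner future is never moved out of its pinned location
        let this = unsafe { self.get_unchecked_mut() };
        let out = match this {
            MaybeDone::Pending(fut) => {
                match unsafe { Pin::new_unchecked(fut) }.poll(cx) {
                    Poll::Ready(out) => out,
                    Poll::Pending => return Poll::Pending,
                }
            }
            _ => return Poll::Ready(()),
        };
        *this = MaybeDone::Done(out);
        Poll::Ready(())
    }
}

With that covered, here’s the macro itself: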
#[macro_export]
macro_rules! join {
    ($($fut:ident),* $(,)?) => { {
        async {
            $(
                // Move future into a local so that it is pinned in one place and
                // is no longer accessible by the end user.
                let mut $fut = $crate::MaybeDone::new($fut);
            )*
            $crate::utils::poll_fn(move |cx| {
                use $crate::utils::future::Future;
                use $crate::utils::task::Poll;
                use $crate::utils::pin::Pin;
                let mut all_done = true;
                $(
                    let fut = unsafe { Pin::new_unchecked(&mut $fut) };
                    all_done &= Future::poll(fut, cx).is_ready();
                )*
                if all_done {
                    Poll::Ready(($(
                        unsafe { Pin::new_unchecked(&mut $fut) }.take().unwrap(),
                    )*))
                } else {
                    Poll::Pending
                }
            }).await
        }
    } }
}
As you can see, the outermost value returned is an async {} block. This isn’t
a specific type, but it can be referred to using impl Future. The type
returned by Future::join, however, is a concrete Join future. This type can
be addressed by name, and actually passed around.
However as we chain Future::join repeatedly, the resulting future’s signature
will look somewhat like Join<Join<Join<Join<T>>>>. This is not great.
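To make that concrete, here’s roughly what a chained invocation looks like. The exact type parameters depend on how async-std’s Join combinator is defined, so treat the annotation as a sketch:

let a = future::ready(1u8);
let b = future::ready(2u8);
let c = future::ready(3u8);
let fut = a.join(b).join(c);
// fut: Join<Join<Ready<u8>, Ready<u8>>, Ready<u8>> (roughly)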
So on the one hand we have anonymous futures which can only be addressed
through impl Future. And on the other hand we have deeply nested futures
which are a pain to write by hand. Can we do better?
Consistent return types
One thing I mentioned at the start but didn’t dive into yet is the fact that
we’d like to align the return types of join! and Future::join. Even if
Future::join would only ever take one other future as an argument, being able
to switch between the method and the macro without needing to change the
signature of the returned types is a huge bonus.
After having worked on async-std for the past two years, where a lot of APIs
use async fn, I’m now somewhat convinced that the stdlib should never do
this. You can see this reflected in APIs such as std::future::ready, which
now returns the concrete future std::future::Ready, whereas in async-std it
was just an async fn.
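To illustrate the difference, here’s a minimal sketch (the function names are made up) contrasting the two approaches:

use std::future::{self, Ready};

// `std::future::ready` returns the concrete, nameable `Ready` future, so it
// can appear directly in type signatures
fn ready_concrete() -> Ready<u8> {
    future::ready(1)
}

// an `async fn` returns an anonymous future which can only be referred to as
// `impl Future<Output = u8>`; the type itself cannot be named
async fn ready_anonymous() -> u8 {
    1
}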
Probably another point worth touching on is the futures-rs implementation of
join!. This doesn’t return any kind of future at all, wrapping the .await
call within the macro instead. I feel somewhat strongly that .await calls
shouldn’t be hidden in code, but instead always be visible.
// Example of futures_rs::join!
let a = future::ready(1u8);
let b = future::ready(2u8);
assert_eq!(join!(a, b), (1, 2)); // is this sync or async?
This slight digression into futures-rs aside, I think if we were to add
future::Future::join and future::join! to the stdlib, both should return
concrete futures. And because they effectively do the same thing, we should
make it so they both return the same Join type.
Maybe const can help?
So the question now becomes: “how can we do this?”. And I think the answer for this is: “const tuples may be able to help”.
So const tuples don’t exist in Rust today yet. Not even on nightly. The only way to create variadic tuples is through macros like we’ve shown. However from talking to members of the const-eval WG const tuples are definitely on the roadmap, though it may take a while. However now that we’re seeing a move to fund more people to work on the compiler, I’m hoping that this may be possible within a few years, which isn’t that long in the grand scheme of things.
Given there’s no proposal for const tuples, it’s hard to write an example since I have no clue what the syntax for it will be. For N-length arrays the syntax is the following:
pub fn array_windows<const N: usize>(&self) -> ArrayWindows<'_, T, N>;
The const N: usize here is the length argument for the array of type T. The
type this function returns operates on [T; N]. However tuples don’t have a
single consistent type T; the values contained within tuples are
heterogeneous. So a tuple of length N can contain N different types. I have
no clue how this would be expressed in const contexts (if at all possible?).
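For contrast, here’s a minimal const generics sketch for homogeneous arrays (the function name is made up). Note how a single element type covers every slot, which is exactly what tuples don’t give us:

// a minimal const generics sketch: the length `N` is part of the type, but
// every element shares the single element type (`u32` here)
fn sum_array<const N: usize>(xs: [u32; N]) -> u32 {
    xs.iter().sum()
}

// `N` is inferred from the array literal
assert_eq!(sum_array([1, 2, 3]), 6);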
So for now let’s just pretend we can define N-length tuples inside function
signatures, and cross our fingers that this makes enough sense that the idea
comes across. Assuming something like that would work, I would expect
Future::Join
to be able to defined along these lines:
impl Future {
    /// Join with one other future.
    pub fn join<F: Future>(self, other: F) -> Join<(Self, F)>;
}
This signature tries to convey: this future holds at least two futures: Self,
and another future we’re joining with. For the join! macro we could fill out
the types using code generation, populating the values of the tuple at
compile time. Invoking it would yield the following return type:
let a = future::ready(1u8);
let b = future::ready(2u8);
let fut = join!(a, b);
// fut: Join<(Ready<Output = u8>, Ready<Output = u8>)>
let a = future::ready(1u8);
let b = future::ready(2u8);
let c = future::ready(3u8);
let fut = join!(a, b, c);
// fut: Join<(Ready<Output = u8>, Ready<Output = u8>, Ready<Output = u8>)>
Assuming once we have const tuples we’ll also have const panic, we can guard
against the case where zero futures are provided to Join. Or perhaps the
signature should instead be:
Join<Self, (Other, Other2)>
The details are unclear, because, well, I don’t know how this should work in
the future. Maybe there’s a different feature at play here too: what if
expressing this in signatures actually requires const-variadics or something?
This may be relying on a variety of features I’m not tracking.
What does that mean for adding futures concurrency to the stdlib?
This post is rooted in research I was doing exactly to answer that question.
join! and Future::join feel like they do exactly what they should, modulo
some issues around their return types. Unfortunately, however, it seems the
best solution would require a const-eval feature that doesn’t even have an
RFC yet.
Given that I expect Rust to stick around for at least a few more decades, and
how core this functionality is for async programming, I think it’s actually
worth waiting to implement these features correctly, rather than rushing to
add them in the short term. Libraries such as async-std and async-macros can
provide suitable solutions in user space in the interim.
In addition to that, there’s one more language feature required before we can
consider adding Future::join: namely, we need either #[cfg(accessible)] or
#[cfg(version)]. This is currently a blocker for adding any method to the
Future trait. Since the majority of the async ecosystem relies on Ext traits
to implement missing functionality, adding a method of the same name to the
stdlib would cause ambiguity. So in order to prevent accidental ecosystem
breakage, libraries outside of std should gain the ability to detect whether
a method has been implemented in the stdlib, which is what accessible and
version are for.
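As a rough sketch of the problem (the trait and method names here are made up for illustration, not any real crate’s API):

use std::future::Future;

// hypothetical ecosystem extension trait, standing in for the `Ext` traits
// crates provide today
trait JoinExt: Future + Sized {
    fn join<F: Future>(self, other: F) -> (Self, F) {
        // a real implementation returns a combinator future; a plain tuple
        // stands in here just to keep the sketch compilable
        (self, other)
    }
}

impl<T: Future> JoinExt for T {}

// if the stdlib later adds `Future::join`, a call like `fut.join(other)`
// that resolves to `JoinExt::join` today becomes ambiguous; to avoid that
// breakage, crates need `#[cfg(accessible)]` or `#[cfg(version)]` to
// conditionally drop their own method once std provides it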
However, one possibility may be that we add join! based on the async {} block
implementation in the near term, without adding Future::join as well. That
would at least allow us to expose that functionality from the stdlib, even if
in some instances it may not be the most ergonomic.
Then later on, once we gain the ability to reason about tuples/variadics in
const contexts, we can switch the return type to be Join, and add the
Future::join method as well. That way we get a solution in the short term,
but still do the right thing in the long term. This depends on async {} being
forward compatible with returning a concrete future though. I’m not sure if
this has been done before, and the lang team might need to weigh in on that.
edit (2021-01-17): As pointed out here by matthieum, if we had a variadic
Join type, there’s no reason we couldn’t implement join on tuples directly.
// Tuple::join
let a = future::ready(1u8);
let b = future::ready(2u8);
let c = future::ready(3u8);
assert_eq!((a, b, c).await, (1, 2, 3));
// Array::join
let a = future::ready(1u8);
let b = future::ready(2u8);
let c = future::ready(3u8);
assert_eq!([a, b, c].await, [1, 2, 3]);
This would soft-deprecate the need for future::join!, and could probably be
extended to arrays and slices of futures too [1]. The question is whether the
same Join type could be used for all implementations, since it wouldn’t
return a tuple but an array or vec instead. This can probably only be
answered once designs for the corresponding language features start.
[1]: join! on an array is effectively an instance of join_all!.
In my designs I’ve mostly treated join_all! as not being a primitive, instead
favoring designs such as TaskGroup and ParallelStream for collections of N
futures, since these more often than not will want to be run on an executor
anyway. However, wanting to join N futures is still nice to be able to do,
and implementing Array::join may very well provide a way for us to do so.
Other considerations
Everything we’ve expressed here not only applies to future::Future::join and
future::join!. It applies to the try_join, race, and try_race variants as
well. The async_std::future docs explain how these types cover the full range
of awaiting futures.
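As a rough sketch of one of those variants, in the same style as the earlier examples (and, like them, relying on async-std’s unstable APIs): try_join short-circuits on the first error, and otherwise returns all the Ok values:

// sketch of the `try_join` variant: resolves to `Ok` of both outputs, or
// short-circuits with the first `Err`
let a = future::ready(Ok::<u8, ()>(1));
let b = future::ready(Ok::<u8, ()>(2));
assert_eq!(a.try_join(b).await, Ok((1, 2)));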
Additionally the future::join
variants only really work well when you know
ahead of time how many futures you’re going to be awaiting. If the number is
dynamic, other constructs should be used. In a future post I may talk about
TaskGroup
, an adaptation of crossbeam::scope
I’m working on, inspired by
Swift’s upcoming task
proposal. But other constructs like parallel-stream
and FuturesUnordered
already exist as well.
Conclusion
In this post we’ve looked at what it would take to add Future::join and join!
to the stdlib, where both functions would return the same, named future. One
plausible way to achieve this would be through const tuples (and possibly
const variadics, which may or may not be the same thing).
However, it may be possible to add future::join! in the near term and, once
Rust gains the appropriate language features, add Future::join and upgrade
join! to use the same Join future. This would enable adding the functionality
in the near term, while still achieving the ideal design later on.
This post is a bit of an experiment: single draft, Saturday morning writing. I’ve been doing a lot of research into stabilizing async Rust paradigms recently, and figured I’d share some of the findings along the way. In part for my own reference. But also to communicate needs async Rust may have to members of other Rust teams.
Anyway, hope you enjoyed this, and hope you have a good weekend!
If you or your company like the work I’m doing, you’re welcome to support me through GitHub sponsors. Special thanks to my sponsors: hibbian, milesgranger, romatthe, No9, and several others who prefer to remain anonymous.