Async overloading
2021-08-24
If the stdlib were to expose APIs for async IO, how should it be namespaced?
```rust
// Like this?
use std::async_io::remove_file;

// Like this?
use std::io::async_remove_file;

// Or like this?
use std::io::remove_file;
```
This post looks at async overloading in Swift, how we could potentially translate it to Rust, and makes the case that we should make investigating its feasibility a priority for the Async Foundations WG.
Choose your calling convention
When designing an API in Rust which performs IO, you have to make a decision whether you want it to be synchronous, asynchronous, or both.
If you choose “synchronous” it’s likely easier to write, iterate on, and use. But it may be less performant, and interacting with async APIs can be tricky.
Choosing “asynchronous” will often mean better performance (especially on multi-core machines) and better compatibility with other async networking code. But it is often significantly harder to author and maintain.
If you author both so end-users can decide what they want to consume, not only do you incur the cost of writing the same code twice; you also need to choose how to expose both APIs to end-users. This leads to situations like the mongodb crate, which exposes a feature-flagged sync submodule containing a copy of all IO types for use in synchronous code.
There’s a clear “primary API” (async) and “secondary API” (sync). From a maintainer’s perspective this means that whenever a new async API is introduced, you need to manually ensure that the sync counterpart is added as well. And from a user’s perspective you need to make sure you don’t accidentally call the wrong API, since both live in the same namespace.
Another example is the pair of postgres and tokio-postgres crates, which split the sync and async APIs out into two different crates instead. This solves the namespace problem, but still requires manually ensuring both APIs are kept up to date. And end-users still need to remember to import the right crate for their purpose.
The current state of crates isn’t unworkable: we know Rust is used in production, often to great success. But I don’t think it’s hard to argue that it isn’t particularly great either. The fact that we need feature flags, submodules, or even separate crates to create good designs is a direct artifact of the limitations of Rust as a language, and not an issue with any of the ecosystem crates.
The standard library
There’s no arguing that Async Rust is hot right now. Companies both large and small have made commitments to using Rust, and async in particular is integral to Rust’s adoption. The Async WG (which I’m part of) is looking at integrating parts of the ecosystem into the stdlib: including core traits, but also networking abstractions and other IO facilities. And unfortunately the logical endpoint of our current trajectory would be a significant increase in the API surface of the stdlib.
As with all things Rust: nothing is certain until it’s stabilized. But “async compat” has been a recurring theme, and the Async WG is looking at ways to include runtime-independent async APIs in the stdlib. If we were to add an async counterpart for every IO API currently in the stdlib, we’d end up exposing over 300 new types, not accounting for methods.
To get a sense of what this would feel like we can look at other language runtimes. Python, C#, and Node.js all have this sync/async split. I’m most familiar with Node.js, which has 3 different calling conventions (callbacks, async/await, blocking). In practice that looks like this:
```js
// Delete a file with the default async callback-based API.
import { unlink } from 'fs';
unlink('./tmp/foo.txt', (err) => {
  if (err) throw err;
  console.log('success');
});

// Delete a file with the recommended async Promise-based API.
import { unlink } from 'fs/promises';
await unlink('./foo.txt');
console.log('success');

// Delete a file with the sync blocking API.
import { unlinkSync } from 'fs';
unlinkSync('./foo.txt');
console.log('success');
```
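For comparison, deleting a file with Rust’s current blocking API can be exercised today. Here’s a minimal, self-contained sketch; the temp-file name and the `create_then_remove` helper are invented for the demo:

```rust
use std::fs::{self, File};
use std::io;
use std::path::Path;

// Create a file at `path`, then delete it with the blocking API.
fn create_then_remove(path: &Path) -> io::Result<()> {
    File::create(path)?;
    fs::remove_file(path)
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("async-overloading-demo.txt");
    create_then_remove(&path)?;
    assert!(!path.exists());
    println!("success");
    Ok(())
}
```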
Rust’s counterpart to Node’s fs.unlink is std::fs::remove_file. Say we wanted to expose an async counterpart to it as part of the stdlib. We’d have a few options:
- We could create a new submodule (e.g. std::async_fs) to contain a full counterpart of std::fs, including std::async_fs::remove_file. This approach is similar to how Python supports both import io and import asyncio.
- We could add async types directly under std::fs, so we’d have both std::fs::remove_file and std::fs::async_remove_file. This approach is similar to the way Node.js exposes sync APIs. But having to prefix each call with async_* and suffix it with .await doesn’t make this an appealing option for Rust, which is likely why I haven’t heard from anyone who prefers this approach over submodules. Even if prefixing submodules could lead to gems such as std::async_sync 1.
1. I don’t believe the libs team would seriously let async_sync be a part of the stdlib. I’ve heard talk about deprecating std::sync and splitting it into separate submodules such as std::lock and std::atomic. Which means we’d likely sooner see a std::async_lock than std::async_sync. But I still find it amusing to think about an alternate future where we’d have sync, async_sync, async, Async, Sink, AsyncSink, and Sync in the stdlib.
But both options share a core issue: APIs still need to be manually matched to the context they’re called in. If you’re in a sync context, std::async_fs::remove_file cannot be called. You need to create an async context first. And likewise if you’re in an async context: std::fs::remove_file can be used, but it’s most likely not what you want: you probably want the async version instead. This means that depending on the type of context you’re in, half the IO APIs in the stdlib are of no use to you. Which leaves the question: is there a way we could avoid having to manually match APIs in the stdlib to the right calling convention?
Async overloading in Swift
In an amendment to SE-0296: Async Await made last month, Swift has added support for async overloading which solves this exact problem! It enables sync and async functions of the same name to be declared in the same scope, leaving it up to the caller’s context to determine which variant is the right one:
```swift
// Existing synchronous API
func doSomething() { ... }

// New and enhanced asynchronous API
func doSomething() async { ... }
```
Calling async functions from sync functions triggers an error like this:
```swift
func f() {
    // Compiler error: `await` is only allowed inside `async` functions
    await doSomething()
}
```
But if you’re trying to call the sync variant of an overloaded function inside an async function, you’d see the following error:
```swift
func f() async {
    // Compiler error: Expression is 'async' but is not marked with 'await'
    doSomething()
}
```
However it’s still possible to call the synchronous version of the API by creating a synchronous context inside the async block:
```swift
func f() async {
    let f2 = {
        // In a synchronous context, the non-async overload is preferred:
        doSomething()
    }
    f2()
}
```
But async overloading is not required. It’s still possible to declare async functions without overloading: they’ll just only work in async contexts. And similarly it’s possible to declare sync functions which will work in all contexts.
Unlike other languages, Swift will not need to effectively double the API surface of their stdlib to support their newly added async functionality. From a user’s perspective the same API will “work as expected” in both sync and async contexts. And that is something Rust can learn from.
Bringing async overloading to Rust
Now that you know how Swift has solved this issue, it raises the question: could we integrate a system like this into Rust? The fact that I’m talking about this at all probably gives away that I think we can. But what would that look like?
For async functions, overloading could be easy enough. Adapting the Swift example we saw earlier, the following should be able to work:
```rust
// Existing synchronous API
fn do_something() { ... }

// New and enhanced asynchronous API
async fn do_something() { ... }
```
We could even give a specialization spin to the API, and make the syntax something like:
```rust
default fn do_something() { ... }
async fn do_something() { ... }
```
But I don’t know what the right syntax for this would be. Maybe it would warrant a different approach altogether! Today, if we try awaiting an async function inside a synchronous closure, we’re met with the following diagnostic:
```text
error[E0728]: `await` is only allowed inside `async` functions and blocks
 --> src/lib.rs:4:5
  |
3 | fn f() {
  |    - this is not `async`
4 |     do_something().await;
  |     ^^^^^^^^^^^^^^^^^^^^ only allowed inside `async` functions and blocks
```
With the overload added, we can start suggesting fixes for errors like this too 2. For example:

2. As an aside: I’m super excited for the upcoming diagnostics changes. The diff view is looking really good! It’s just a bummer my blog theme doesn’t allow me to custom-color diagnostics output yet. For now, imagine the diagnostics in my blog have pretty colors like the compiler does, heh.
```text
  |
help: try removing `.await` from the function call
  |
3 | async fn f() {
4 -     do_something().await;
4 +     do_something();
  |
```
And calling the overloaded sync function from an async context could produce a diagnostic like this:
```text
warning: unused implementer of `Future` that must be used
 --> src/lib.rs:4:5
  |
3 | async fn f() {
4 |     do_something();
  |     ^^^^^^^^^^^^^^^
  |
  = note: this function is called in an async context and uses the async overload
  = note: `#[warn(unused_must_use)]` on by default
  = note: futures do nothing unless you `.await` or poll them
  |
help: try adding `.await` to the function call
  |
3 | async fn f() {
4 |     do_something().await;
  |                   ------
```
Just like in Swift, using the synchronous overload in an async context could be done by writing:
```rust
async fn f() {
    let f2 = || {
        do_something()
    };
    f2()
}
```
For traits we could imagine something similar as for functions. Supposing the default keyword (specialization) could also be used for overloads, we could write something like this for an async version of Read:
```rust
/// The synchronous `Read` trait
pub default trait Read {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize>;
}

/// The asynchronous `Read` trait
pub async trait Read {
    async fn read(&mut self, buf: &mut [u8]) -> Result<usize>;
}
```
Depending on how the traits are implemented, users could end up implementing none, either, or both. Which variants of the trait are implemented could then be highlighted through the docs, for example:
```text
---------------------
Trait Implementations
---------------------
Debug
FromRawFd
Read
Read (async)
Write
Write (async)
```
This really is all a rough sketch. I don’t know whether using default as a keyword would make sense here at all, or what the right way of highlighting this in the docs is. What I hope to achieve with this is to share a glimpse of how this could work end-to-end for Rust users. Docs, diagnostics, writing, and consuming code are all different ways to experience the same feature. And I hope this provides an idea of what an integration could look like for both library consumers and authors alike.
Potential caveats
Backwards incompatibility
As far as I can tell Rust doesn’t have anything yet that would preclude us from going down this path. Admittedly, adding this would delay “shipping” async Rust. But it would make it so we don’t need to effectively double the surface area of the stdlib, which to me is worth it (and this is coming from someone who’s helped author an async copy of virtually every stdlib API).
If we decide this is worth exploring, the one thing we shouldn’t do is stabilize std::stream before our explorations are over. I don’t believe any other APIs currently in nightly could pose an issue for async overloading, so as long as we don’t stabilize std::stream while we figure this out, we should be good! 3.
3. And again, this is coming from someone who helped author the Stream RFC and added Stream to nightly. Please believe me: if anyone wants to stabilize async iterators, it’s me. But I think async overloading is the kind of change we should consider and definitively rule out before we attempt to stabilize the Stream API as-is.
Overloading existing stdlib functions
One issue to be aware of is that, unlike Swift, we cannot immediately fail if a synchronous overload is selected in an async function. Rust’s async model allows for delayed .awaiting, which means we cannot error at the call-site. Instead we’ll likely need to hook into the machinery that enables #[must_use], allowing us to validate whether the returned future is actually awaited, and warn if it’s not. Even though this is slightly different from how Swift appears to do things, it should not present an insurmountable hurdle.
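The “delayed .await” property is easy to demonstrate with today’s Rust: constructing a future runs none of its body, so a call-site check can’t tell whether the future will eventually be awaited. A small self-contained sketch (the no-op waker and the SetFlag future are hand-rolled just for this demo):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that flips a flag when polled; merely building it does nothing.
struct SetFlag<'a>(&'a mut bool);

impl Future for SetFlag<'_> {
    type Output = ();
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        *self.get_mut().0 = true;
        Poll::Ready(())
    }
}

// A no-op waker: just enough machinery to poll an always-ready future once.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let mut flag = false;
    {
        // Creating the future runs no code: `flag` stays false.
        let _fut = SetFlag(&mut flag);
    }
    assert!(!flag);

    // Only polling actually performs the work.
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = SetFlag(&mut flag);
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(()));
    assert!(flag);
}
```

Since the dropped-without-polling case is silent at runtime, a lint in the style of #[must_use] is the natural place to surface it.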
Feasibility of async parity
One of the goals of the async-std project was to gauge the feasibility of creating a dual of the stdlib’s IO APIs using async Rust. And two years in we can conclude it’s been an overwhelming success: the only stdlib APIs we haven’t been able to faithfully reimplement in async Rust are those which require async closures and async traits 4. But now that the Async Foundations WG is adding those capabilities to the language, there are no blockers remaining for API parity between sync and async Rust.
4. An example of an API which requires async traits is async-std’s Stream::collect. We’ve managed to work around it, but had to box some items. An example of an API which requires async closures is Stream::filter. The callback provided to the method is not async right now because there’s no way to express the lifetime of the borrowed item; once we have async closures this will become possible to express. Pretty much all differences between std and async-std are minor issues like these, which we’re fairly certain can be solved once async Rust gains more capabilities.
Past async overloading: code reuse
Finally, there’s an interesting question to ask: to which degree is it possible to reuse code across an async overload? You could imagine a sync and an async parser containing identical logic, except for the internal use of .await. How much does Swift’s model limit code reuse? That is something the async WG should answer once it decides async overloading is indeed desirable, and starts looking at ways it could be implemented.
Returning impl Future instead of async fn
Without TAITs, async fns cannot be named, and thus for the stdlib it’ll be desirable to return manually implemented futures from APIs instead. This means that we might want to write something like this:
```rust
default fn do_something_else() { ... }

fn do_something_else() -> DoSomethingElse { ... }

struct DoSomethingElse;
impl Future for DoSomethingElse { ... }
```
Luckily the compiler is aware of which traits are implemented on types, and
could validate that this indeed resolves to a valid async overload.
We should probably even make it so -> impl Future<Output = io::Result<()>>
works, but -> io::Result<impl Future<Output = ()>>
does not. This would likely
be quicker to validate, and keep things feeling like “async fn overloading” only.
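Filled in, such a manually named future could look like the following today. This is a hedged sketch with invented names and a trivially ready future, just to show that the return type is nameable by callers:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A nameable future type, in the style the stdlib would need to export.
pub struct DoSomethingElse {
    value: u32,
}

impl Future for DoSomethingElse {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // Trivially ready; a real implementation would drive IO here.
        Poll::Ready(self.value)
    }
}

// Unlike an `async fn`, callers can write this return type down,
// store it in a struct field, or implement extra traits for it.
pub fn do_something_else() -> DoSomethingElse {
    DoSomethingElse { value: 7 }
}

fn main() {
    let fut: DoSomethingElse = do_something_else();
    assert_eq!(fut.value, 7);
}
```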
Overloads for other effects
A question which has come up is whether this mechanism could be used for other effects as well. For example, it’s conceivable that in a try fn we’d expect to call a fallible API, while in a non-try fn we’d expect the infallible method:
```rust
pub default fn reserve(&mut self, additional: usize);
pub try fn reserve(&mut self, additional: usize) -> throws ReserveError;
```
In my Fallible Iterator Adapter post I pointed out that we’re missing a fair number of fallible variants of existing APIs. Boats has since authored The Problem of Effects in Rust, which covers the problem from a language perspective and shows how introducing a different kind of closure could eliminate the need for try_ iterator adapters.
RFC 2116: Fallible Collection Allocation covers adding try_ variants of the base building blocks. But the implementation has decided not to add try_ variants for all allocation APIs, since that would increase the API surface too much. Similar reasoning is given for why we don’t have fallible variants for all iterator APIs either.
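To make the contrast concrete, here is the infallible and fallible reserve pair from RFC 2116 side by side; a minimal sketch assuming the RFC’s Vec::try_reserve API is available:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();

    // Infallible variant: aborts the process if allocation fails.
    v.reserve(16);
    assert!(v.capacity() >= 16);

    // Fallible variant from RFC 2116: reports failure as a `Result`.
    match v.try_reserve(1024) {
        Ok(()) => assert!(v.capacity() >= 1024),
        Err(err) => eprintln!("allocation failed: {err}"),
    }
}
```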
The issues we’re covering in this post, Boats’ effect post, and the issues in RFC 2116 all seem related to each other. I don’t know if “fallible overloading” could provide a partial way out of it, but maybe there’s something to it? The possibilities different kinds of overloading could bring certainly are interesting, and probably worth thinking about more.
Other
There are two topics which I haven’t thought about, but should be thought about by the working group:
- How does async overloading interact with dyn Trait?
- How does async overloading interact with FFI?
Based on what I’ve seen come up in conversations there might be ways to make this work, but it should be looked at more closely as part of a feasibility probe.
Conclusion
Currently the async foundations roadmap does include a note on “async overloading”, but it’s marked as “slightly past the horizon”. I think given this has implications for async iteration, async IO, and every other async libs API, we should make sure we’ve properly investigated async overloading before we make any changes to the stdlib which would be incompatible with it.
To summarize what we’ve covered so far:
- Async Rust is currently on a trajectory to doubling the API surface of IO APIs in the stdlib.
- Async overloading could provide a way out for us, enabling us to add async IO APIs to the stdlib while keeping the number of IO APIs in the stdlib to its current number.
- As long as we don’t stabilize Stream, async IO traits, or any other async IO types, we should be capable of introducing async overloading without any sharp edges.
- Because async overloading informs the implementation of all of the async types in the stdlib, investigating its feasibility should be a priority for the async WG.
My hope for this post is that it convinces folks that async overloading is an idea worth taking seriously before we continue with any other designs, and that it prompts a feasibility study into whether this could actually work from a language, libs, and compiler perspective.
Thanks to: Eric Holk, Irina Shestak, James Halliday, Scott McMurray, and Wesley Wiser for helping review this post prior to publishing.