pub struct Builder { /* private fields */ }
Builds Tokio Runtime with custom configuration values.

Methods can be chained in order to set the configuration values. The Runtime is constructed by calling build.

New instances of Builder are obtained via Builder::new_multi_thread or Builder::new_current_thread.

See function level documentation for details on the various configuration settings.
Examples
use tokio::runtime::Builder;

fn main() {
    // build runtime
    let runtime = Builder::new_multi_thread()
        .worker_threads(4)
        .thread_name("my-custom-name")
        .thread_stack_size(3 * 1024 * 1024)
        .build()
        .unwrap();

    // use runtime ...
}
Implementations

impl Builder

pub fn new_current_thread() -> Builder
Returns a new builder with the current thread scheduler selected.

Configuration methods can be chained on the return value.

To spawn non-Send tasks on the resulting runtime, combine it with a LocalSet.
pub fn enable_all(&mut self) -> &mut Self
Enables both I/O and time drivers.

Doing this is a shorthand for calling enable_io and enable_time individually. If additional components are added to Tokio in the future, enable_all will include these future components.
Examples
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .enable_all()
    .build()
    .unwrap();
pub fn worker_threads(&mut self, val: usize) -> &mut Self
Sets the number of worker threads the Runtime will use.

This can be any number above 0, though it is advised to keep this value on the smaller side.

This will override the value read from the environment variable TOKIO_WORKER_THREADS.

Default

The default value is the number of cores available to the system.

When using the current_thread runtime this method has no effect.
Examples

Multi threaded runtime with 4 threads:

use tokio::runtime;

// This will spawn a work-stealing runtime with 4 worker threads.
let rt = runtime::Builder::new_multi_thread()
    .worker_threads(4)
    .build()
    .unwrap();

rt.spawn(async move {});

Current thread runtime (will only run on the current thread via Runtime::block_on):

use tokio::runtime;

// Create a runtime that _must_ be driven from a call
// to `Runtime::block_on`.
let rt = runtime::Builder::new_current_thread()
    .build()
    .unwrap();

// This will run the runtime and future on the current thread
rt.block_on(async move {});

Panics

This will panic if val is not larger than 0.
pub fn max_blocking_threads(&mut self, val: usize) -> &mut Self
Specifies the limit for additional threads spawned by the Runtime.

These threads are used for blocking operations like tasks spawned through spawn_blocking. Unlike the worker_threads, they are not always active and will exit if left idle for too long. You can change this timeout duration with thread_keep_alive.

The default value is 512.

Panics

This will panic if val is not larger than 0.

Upgrading from 0.x

In old versions max_threads limited both blocking and worker threads, but the current max_blocking_threads does not include async worker threads in the count.
pub fn thread_name(&mut self, val: impl Into<String>) -> &mut Self
Sets the name of threads spawned by the Runtime’s thread pool.

The default name is “tokio-runtime-worker”.
Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_name("my-pool")
    .build();
pub fn thread_name_fn<F>(&mut self, f: F) -> &mut Self
where
    F: Fn() -> String + Send + Sync + 'static,
Sets a function used to generate the name of threads spawned by the Runtime’s thread pool.

The default name fn is || "tokio-runtime-worker".into().
Examples

use std::sync::atomic::{AtomicUsize, Ordering};

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_name_fn(|| {
        static ATOMIC_ID: AtomicUsize = AtomicUsize::new(0);
        let id = ATOMIC_ID.fetch_add(1, Ordering::SeqCst);
        format!("my-pool-{}", id)
    })
    .build();
pub fn thread_stack_size(&mut self, val: usize) -> &mut Self
Sets the stack size (in bytes) for worker threads.

The actual stack size may be greater than this value if the platform specifies a minimal stack size.

The default stack size for spawned threads is 2 MiB, though this particular stack size is subject to change in the future.
Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_stack_size(32 * 1024)
    .build();
pub fn on_thread_start<F>(&mut self, f: F) -> &mut Self
where
    F: Fn() + Send + Sync + 'static,
Executes function f after each thread is started but before it starts doing work.

This is intended for bookkeeping and monitoring use cases.
Examples

use tokio::runtime;

let runtime = runtime::Builder::new_multi_thread()
    .on_thread_start(|| {
        println!("thread started");
    })
    .build();
pub fn on_thread_stop<F>(&mut self, f: F) -> &mut Self
where
    F: Fn() + Send + Sync + 'static,
Executes function f before each thread stops.

This is intended for bookkeeping and monitoring use cases.
Examples

use tokio::runtime;

let runtime = runtime::Builder::new_multi_thread()
    .on_thread_stop(|| {
        println!("thread stopping");
    })
    .build();
pub fn on_thread_park<F>(&mut self, f: F) -> &mut Self
where
    F: Fn() + Send + Sync + 'static,
Executes function f just before a thread is parked (goes idle).

f is called within the Tokio context, so functions like tokio::spawn can be called, and may result in this thread being unparked immediately.

This can be used to start work only when the executor is idle, or for bookkeeping and monitoring purposes.

Note: There can only be one park callback for a runtime; calling this function more than once replaces the last callback defined, rather than adding to it.
Examples

Multithreaded executor:

use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

use tokio::runtime;
use tokio::sync::Barrier;

let once = AtomicBool::new(true);
let barrier = Arc::new(Barrier::new(2));

let runtime = runtime::Builder::new_multi_thread()
    .worker_threads(1)
    .on_thread_park({
        let barrier = barrier.clone();
        move || {
            let barrier = barrier.clone();
            if once.swap(false, Ordering::Relaxed) {
                tokio::spawn(async move { barrier.wait().await; });
            }
        }
    })
    .build()
    .unwrap();

runtime.block_on(async {
    barrier.wait().await;
});

Current thread executor:

use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

use tokio::runtime;
use tokio::sync::Barrier;

let once = AtomicBool::new(true);
let barrier = Arc::new(Barrier::new(2));

let runtime = runtime::Builder::new_current_thread()
    .on_thread_park({
        let barrier = barrier.clone();
        move || {
            let barrier = barrier.clone();
            if once.swap(false, Ordering::Relaxed) {
                tokio::spawn(async move { barrier.wait().await; });
            }
        }
    })
    .build()
    .unwrap();

runtime.block_on(async {
    barrier.wait().await;
});
pub fn on_thread_unpark<F>(&mut self, f: F) -> &mut Self
where
    F: Fn() + Send + Sync + 'static,
Executes function f just after a thread unparks (starts executing tasks).

This is intended for bookkeeping and monitoring use cases; note that work in this callback will increase latencies when the application has allowed one or more runtime threads to go idle.

Note: There can only be one unpark callback for a runtime; calling this function more than once replaces the last callback defined, rather than adding to it.
Examples

use tokio::runtime;

let runtime = runtime::Builder::new_multi_thread()
    .on_thread_unpark(|| {
        println!("thread unparking");
    })
    .build()
    .unwrap();

runtime.block_on(async {
    tokio::task::yield_now().await;
    println!("Hello from Tokio!");
});
pub fn build(&mut self) -> Result<Runtime>
Creates the configured Runtime.

The returned Runtime instance is ready to spawn tasks.
Examples

use tokio::runtime::Builder;

let rt = Builder::new_multi_thread().build().unwrap();

rt.block_on(async {
    println!("Hello from the Tokio runtime");
});
pub fn thread_keep_alive(&mut self, duration: Duration) -> &mut Self
Sets a custom timeout for idle threads in the blocking pool.

By default, the timeout for a thread is set to 10 seconds.

Examples

use std::time::Duration;

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_keep_alive(Duration::from_millis(100))
    .build();
pub fn global_queue_interval(&mut self, val: u32) -> &mut Self
Sets the number of scheduler ticks after which the scheduler will poll the global task queue.

A scheduler “tick” roughly corresponds to one poll invocation on a task.

By default the global queue interval is:

- 31 for the current-thread scheduler.
- 61 for the multithreaded scheduler.

Schedulers have a local queue of already-claimed tasks, and a global queue of incoming tasks. Setting the interval to a smaller value increases the fairness of the scheduler, at the cost of more synchronization overhead. That can be beneficial for prioritizing getting started on new work, especially if tasks frequently yield rather than complete or await on further I/O. Conversely, a higher value prioritizes existing work, and is a good choice when most tasks quickly complete polling.
Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .global_queue_interval(31)
    .build();
pub fn event_interval(&mut self, val: u32) -> &mut Self
Sets the number of scheduler ticks after which the scheduler will poll for external events (timers, I/O, and so on).

A scheduler “tick” roughly corresponds to one poll invocation on a task.

By default, the event interval is 61 for all scheduler types.

Setting the event interval determines the effective “priority” of delivering these external events (which may wake up additional tasks), compared to executing tasks that are currently ready to run. A smaller value is useful when tasks frequently spend a long time in polling, or frequently yield, which can result in overly long delays picking up I/O events. Conversely, picking up new events requires extra synchronization and syscall overhead, so if tasks generally complete their polling quickly, a higher event interval will minimize that overhead while still keeping the scheduler responsive to events.
Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .event_interval(31)
    .build();