Tags: jbaldwin/libcoro
coro::when_any supports tuples (different return types!) (#301)
when_any now supports taking a parameter pack of tasks that can each return a unique type. The most useful application of this right now is pairing a task with a timeout via when_any. To facilitate this usage, coro::io_scheduler has a new schedule function: schedule(stop_token, task, timeout). Closes #300
coro::when_any (#298)
Adds a new construct that returns the first completed task's result; all other task results are discarded/detached/orphaned. There are currently two ways to invoke when_any. The first takes a std::stop_token and signals the remaining tasks that a task has already completed; the user must check the stop token for a stop request, it is not automatic. The second is fire and forget: all tasks are still required to complete, but only the first task's result is used. This method isn't particularly recommended, but the API is available for cases where a stop token isn't needed. EMSCRIPTEN does not support std::stop_source/std::stop_token, so this feature is currently disabled on that platform; I do not want to shim it in. Closes #279
coro::task_container gc fix not completing coroutines (#288)
The coro::task_container::gc_internal function was deleting coroutines once they were marked .done(); however, some mechanism would rarely cause the user_task coroutine to not actually execute. I'm still not sure exactly why this is the case, but:
1) Simply disabling gc_internal() made the problem stop.
2) Running gc_internal() and moving the coro::task to a 'dead' list still caused the issue.
With these in mind I spent time re-reading the specification on final_suspend and determined that coro::task_container should instead keep an atomic counter to track the submitted coroutines and have them self-delete. Self-deletion is now done via a coro::detail::task_self_destroying coroutine type that takes advantage of the promise's final_suspend() not suspending: the spec states that if final_suspend() does not suspend, the coroutine calls destroy() on itself. Closes #287
Use lock for sync_wait completion (#272)
* release/acquire memory ordering has a race condition
* also reproduced with seq_cst
* a lock is now required around the std::condition_variable to properly and always wake the waiting sync_wait thread; this is necessary for correctness over speed
Closes #270
coro::thread_pool high cpu usage when tasks < threads (#265)
* The check for m_size > 0 was keeping threads awake in a spin state until all tasks completed. The pool now correctly uses m_queue.size() behind the lock so threads are only woken on the condition variable when tasks are waiting to be processed.
* Fix deadlock between task_container and tls::client: the client's destructor schedules a tls cleanup task, and the task_container's lock was being locked twice when the cleanup task was destroyed. Closes #262
* Adjust when task_container's user_task is deleted. It is now deleted inline in make_user_task so any destructors that get invoked and possibly schedule more coroutines do not cause a deadlock.
* io_scheduler is now std::enable_shared_from_this