teak-llvm/clang-tools-extra/clangd/Threading.cpp
Sam McCall d1a7a37c22 [clangd] Pass Context implicitly using TLS.
Summary:
Instead of passing a Context around explicitly, we now have a thread-local
Context, accessed via `Context::current()`, which acts as an implicit
argument to every function.
Most manipulation of it should use the WithContextValue helper, which
augments the current Context with a single key-value pair and restores
the old context on destruction.
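
For illustration, typical usage might look like the following hedged sketch
(not part of this patch: the Key<T> type and get() accessor are assumed from
Context.h, and RequestID/handleRequest are made-up names):

  // (inside namespace clang::clangd)
  #include "Context.h"
  #include <string>

  // Hypothetical key; any component can declare its own.
  Key<std::string> RequestID;

  void handleRequest() {
    // Augment the implicit context for the duration of this scope.
    WithContextValue WithID(RequestID, std::string("req-42"));
    // ...anywhere downstream, with no Context parameter threaded through:
    if (const std::string *ID = Context::current().get(RequestID))
      (void)*ID; // e.g. attach the request ID to a log line or trace
  } // The previous context is restored when WithID is destroyed.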

Advantages are:
- less boilerplate in functions that just propagate contexts
- reading most code doesn't require understanding context at all, and
  fewer places still need to use contexts as explicit values
- fewer opportunities to pass the "wrong" context when it changes within
  a scope (e.g. when using Span)
- contexts pass through interfaces we can't modify, such as VFS
- propagating contexts across threads was slightly tricky (e.g.
  copy vs move, no move-init in lambdas) and is now encapsulated in
  the threadpool (see the enqueueing sketch below)
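
As a hedged sketch of that encapsulation (the addToEnd signature here is
simplified and assumed; the real entry points live in Threading.h), each
task is paired with a snapshot of the caller's context before it crosses
threads, and the worker re-installs it before running the task:

  void ThreadPool::addToEnd(UniqueFunction<void()> Request) {
    if (RunSynchronously) {
      Request(); // run in place, in the caller's own context
      return;
    }
    {
      std::lock_guard<std::mutex> Lock(Mutex);
      // Pair the task with a snapshot of the submitting thread's context;
      // the worker re-installs it via WithContext (see the loop below).
      RequestQueue.emplace_back(std::move(Request),
                                Context::current().clone());
    }
    RequestCV.notify_one();
  }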

Disadvantages are all the usual TLS concerns: hidden magic, and the
potential for higher memory usage on threads that don't use the
context. (In practice, it's just one pointer per thread.)
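
For concreteness, a minimal sketch of what the thread-local storage can look
like (the real definition lives in Context.cpp; Context::empty() and
swapCurrent() are assumptions here, not guaranteed to match it exactly):

  static Context &currentContext() {
    // One thread_local Context per thread; a Context is roughly a single
    // pointer to an immutable chain of key-value nodes, so the per-thread
    // cost is small even if the thread never reads it.
    static thread_local Context C = Context::empty();
    return C;
  }

  const Context &Context::current() { return currentContext(); }

  Context Context::swapCurrent(Context Replacement) {
    std::swap(Replacement, currentContext());
    return Replacement; // the previously installed context
  }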

Reviewers: ilya-biryukov

Subscribers: klimek, jkorous-apple, ioeric, cfe-commits

Differential Revision: https://reviews.llvm.org/D42517

llvm-svn: 323872
2018-01-31 13:40:48 +00:00

#include "Threading.h"
#include "llvm/Support/FormatVariadic.h"
#include "llvm/Support/Threading.h"
namespace clang {
namespace clangd {
ThreadPool::ThreadPool(unsigned AsyncThreadsCount)
    : RunSynchronously(AsyncThreadsCount == 0) {
  if (RunSynchronously) {
    // Don't start the worker thread if we're running synchronously
    return;
  }

  Workers.reserve(AsyncThreadsCount);
  for (unsigned I = 0; I < AsyncThreadsCount; ++I) {
    Workers.push_back(std::thread([this, I]() {
      llvm::set_thread_name(llvm::formatv("scheduler/{0}", I));
      while (true) {
        UniqueFunction<void()> Request;
        Context Ctx;

        // Pick request from the queue
        {
          std::unique_lock<std::mutex> Lock(Mutex);
          // Wait for more requests.
          RequestCV.wait(Lock,
                         [this] { return !RequestQueue.empty() || Done; });
          if (RequestQueue.empty()) {
            assert(Done);
            return;
          }

          // We process requests starting from the front of the queue. Users of
          // ThreadPool can prioritise their requests by pushing them onto
          // either end of the queue (using addToEnd or addToFront).
          std::tie(Request, Ctx) = std::move(RequestQueue.front());
          RequestQueue.pop_front();
        } // unlock Mutex

        WithContext WithCtx(std::move(Ctx));
        Request();
      }
    }));
  }
}

ThreadPool::~ThreadPool() {
  if (RunSynchronously)
    return; // no worker thread is running in that case

  {
    std::lock_guard<std::mutex> Lock(Mutex);
    // Wake up the worker thread
    Done = true;
  } // unlock Mutex
  RequestCV.notify_all();

  for (auto &Worker : Workers)
    Worker.join();
}
} // namespace clangd
} // namespace clang