If you have ever written network programs in C++, you are probably very familiar with this scenario —
You need to send request B after request A completes, then send request C after B completes. So you end up writing code like this:
```cpp
client.send(requestA, [](Response a) {
    client.send(requestB(a), [](Response b) {
        client.send(requestC(b), [](Response c) {
            // Finally made it here...
        });
    });
});
```
This is the infamous “callback hell”. The deeper the nesting, the harder the code is to read, and error handling becomes a nightmare — each layer of callback needs to handle errors independently. One missed callback call in a branch, and the entire request silently disappears without a trace.
The callback pattern also introduces another thorny problem: lifetime management. When a callback executes, every captured object must still be alive, yet their lifetimes are often hard to predict. You end up littering the code with shared_ptr and shared_from_this() just to keep things alive — ugly and unnecessarily expensive.
The most maddening part is cancellation. Imagine a user closes a page and you want to cancel an in-progress multi-step operation, but every step in the callback chain may have already started. How do you propagate the cancel signal? How do you ensure all resources are properly cleaned up? This is nearly an unsolvable problem.
If you have worked with async programming in Go or Python, you have probably envied their concise coroutine syntax — expressing asynchronous logic with synchronous-looking code. The good news is that C++20 introduced coroutines, and C++ programmers finally have a proper tool for the job.
The Async World of gRPC
Before diving in, let’s take a look at what gRPC’s async model looks like and what new challenges it brings.
The C++ implementation of gRPC provides two async APIs: the classic CompletionQueue-based model and the Reactor pattern callback model. Clients typically use the Reactor pattern — inheriting various ClientXxxReactor classes and implementing callbacks; servers typically use the CompletionQueue model — registering operations with the queue and polling for results.
For a client-side server-streaming RPC, you need to inherit grpc::ClientReadReactor<T> and implement several callbacks:
```cpp
class MyReader : public grpc::ClientReadReactor<Response> {
public:
    void OnReadDone(bool ok) override {
        if (ok) {
            // Process the received data, then call StartRead again
            StartRead(&response_);
        }
    }
    // ...
};
```
The server side is even more complex. With the CompletionQueue-based model, you need to manually maintain a state machine:
```cpp
enum class State { WAIT, READ, WRITE, FINISH };

class Handler {
    void Proceed() {
        switch (state_) {
        case State::WAIT:
            // Register the next request, transition to READ state
            break;
        case State::READ:
            // Read data, decide whether to continue reading or writing
            break;
        // ...
        }
    }
};
```
This state machine must be maintained by hand. A single wrong state transition can cause data loss or a crash. Every time you need to implement this logic for a new RPC method, you have to go through the same ordeal.
Worse still, both approaches make it very hard to support cancellation cleanly. In the Reactor model, cancellation means calling context->TryCancel() and waiting for the OnDone callback; in the CompletionQueue model, state transitions after cancellation require extra care.
The root of the problem is: gRPC’s async API is designed around callbacks, while we want to organize code around the logical flow.
Promise and Future: Bridging Callbacks and Coroutines
asyncio is an async framework built on C++20 coroutines and the libuv event loop. Its core design idea is simple: use Promise and Future as a bridge to connect the callback world and the coroutine world.
A Promise is an object that can be “resolved” or “rejected”. Future is its other face — you can co_await a Future, and the coroutine will automatically resume when the Promise is resolved.
Take the sleep function as an example, to see how asyncio does it:
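A sketch of the likely shape, based on how the rest of the article uses Promise (the timer helper name here is an assumption, not asyncio's real API):

```cpp
// Sketch (assumed names): wrap the event loop's timer callback in a Promise.
asyncio::task::Task<void> sleep(std::chrono::milliseconds delay) {
    asyncio::Promise<void> promise;

    // Ask the event loop to fire once after `delay`; the callback does
    // nothing except resolve the Promise. No business logic lives here.
    startTimer(delay, [&] { promise.resolve(); });

    co_await promise.getFuture(); // suspend until the timer callback runs
}
```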
Apply the same treatment to gRPC callbacks: resolve or reject a Promise inside the callback, and co_await the corresponding Future on the coroutine side. This lets you write what would otherwise require nested callbacks as straight-line code.
Cancellation Support
Coroutine cancellation is also implemented through the Promise mechanism. asyncio provides a Cancellable wrapper that takes a Future and a cancellation function:
```cpp
co_return co_await asyncio::task::Cancellable{
    promise.getFuture(),
    [&]() -> std::expected<void, std::error_code> {
        // Perform the actual cancellation here
        context->TryCancel();
        return {};
    }
};
```
When external code calls task.cancel(), asyncio walks the task chain to find the Cancellable currently being awaited and executes its cancellation function. For gRPC, the cancellation function simply calls context->TryCancel(). gRPC then handles the cleanup, triggers the OnDone callback, the Promise is eventually rejected, and the coroutine ends with a cancellation error.
With this mechanism, we can cleanly add cancellation support to any gRPC async operation without introducing any special state variables into business code.
Defining the Sample Service
Before writing code, let’s define a sample service covering all four RPC types:
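The .proto definition is not reproduced here; reconstructed from the message and field names used throughout the article (field numbers and scalar types are guesses), it would look roughly like:

```protobuf
syntax = "proto3";

package sample;

service SampleService {
  rpc Echo(EchoRequest) returns (EchoResponse);              // unary
  rpc GetNumbers(GetNumbersRequest) returns (stream Number); // server streaming
  rpc Sum(stream Number) returns (SumResponse);              // client streaming
  rpc Chat(stream ChatMessage) returns (stream ChatMessage); // bidirectional
}

message EchoRequest  { string message = 1; }
message EchoResponse { string message = 1; int64 timestamp = 2; }

message GetNumbersRequest { int32 value = 1; int32 count = 2; }
message Number            { int32 value = 1; }

message SumResponse { int64 total = 1; int32 count = 2; }

message ChatMessage { string user = 1; string content = 2; int64 timestamp = 3; }
```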
Each RPC pattern has its use: Echo is the canonical RPC, GetNumbers lets the server stream a batch of data, Sum lets the client stream data and get an aggregated result, and Chat is the most complex — a bidirectional stream where either side can send at any time.
Let’s implement them one by one, starting with the simplest: client Unary RPC.
Client Unary RPC
Unary RPC is the most straightforward: send one request, receive one response. The corresponding gRPC Reactor method signature is:
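For unary calls, gRPC's callback API takes a completion callback directly rather than a Reactor subclass. A sketch of the wrapper's first half, consistent with the fragments below (the exact member and helper names are assumptions):

```cpp
// Generated async stub method for a unary RPC (gRPC callback API):
//   void Echo(grpc::ClientContext *context, const EchoRequest *request,
//             EchoResponse *response, std::function<void(grpc::Status)> callback);

asyncio::task::Task<sample::EchoResponse>
echo(std::shared_ptr<grpc::ClientContext> context, sample::EchoRequest request) {
    asyncio::Promise<void> promise;
    sample::EchoResponse response;

    // Resolve or reject the Promise inside the gRPC callback; nothing else.
    mStub->async()->Echo(context.get(), &request, &response, [&](const grpc::Status status) {
        if (!status.ok()) {
            promise.reject(status.error_message());
            return;
        }

        promise.resolve();
    });
    // ... continues with the Cancellable await shown below
```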
```cpp
    if (const auto result = co_await asyncio::task::Cancellable{
            promise.getFuture(),
            [&]() -> std::expected<void, std::error_code> {
                context->TryCancel();
                return {};
            }
        };
        !result)
        throw co_await asyncio::error::StacktraceError<std::runtime_error>::make(result.error());

    co_return response;
}
```
The core logic is just a few lines: construct a Promise, decide resolve or reject based on Status inside the callback, then co_await the Promise's Future.
Cancellation support is woven in naturally — wrap the Future with Cancellable, call context->TryCancel() on cancellation. From the caller’s perspective, this function is indistinguishable from a normal synchronous function, yet it never blocks the event loop.
Client Streaming RPCs
Streaming RPCs are more complex than Unary, because data is transmitted one piece at a time and each read or write is an independent async operation.
Server Streaming (Reader)
For a server-streaming RPC, the client needs to inherit grpc::ClientReadReactor<T>. Each call to StartRead causes gRPC to invoke OnReadDone when data is ready.
```cpp
void OnDone(const grpc::Status &status) override {
    if (!status.ok()) {
        mDonePromise.reject(status.error_message());
        return;
    }

    mDonePromise.resolve();
}

void OnReadDone(const bool ok) override {
    // Resolve the per-read Promise when each read completes
    std::exchange(mReadPromise, std::nullopt)->resolve(ok);
}

asyncio::task::Task<std::optional<T>> read() {
    T element;
    // ... (set up mReadPromise, obtain its future, and call StartRead)

    // ok == false means the stream has ended
    if (!co_await asyncio::task::Cancellable{
            std::move(future),
            [this]() -> std::expected<void, std::error_code> {
                mContext->TryCancel();
                return {};
            }
        })
        co_return std::nullopt;

    co_return element;
}

asyncio::task::Task<void> done() {
    if (const auto result = co_await mDonePromise.getFuture(); !result)
        throw co_await asyncio::error::StacktraceError<std::runtime_error>::make(result.error());
}
```
```cpp
while (true) {
    auto number = co_await reader.read();

    if (!number)
        break; // stream ended

    fmt::print("Received: {}\n", number->value());
}

reader.RemoveHold();
co_await reader.done(); // wait for the stream to fully finish and check for errors
```
AddHold and RemoveHold are gRPC Reactor lifecycle control mechanisms that prevent the Reactor from being destroyed while we hold it.
Client Streaming (Writer)
Client-streaming RPC is similar to server-streaming but in the opposite direction. Inherit grpc::ClientWriteReactor<T>; after each StartWrite completes, OnWriteDone is called back:
writeDone() corresponds to gRPC’s StartWritesDone, which signals to the server that the client has finished writing — equivalent to sending EOF on the stream.
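The Writer follows the same recipe as the Reader; a condensed sketch (member names and the exact await shape are assumptions):

```cpp
// Sketch: mirror of the Reader, for the write direction.
template <typename T>
class Writer final : public grpc::ClientWriteReactor<T> {
public:
    void OnWriteDone(const bool ok) override {
        // Resolve the per-write Promise when each write completes
        std::exchange(mWritePromise, std::nullopt)->resolve(ok);
    }

    asyncio::task::Task<bool> write(const T element) {
        auto future = mWritePromise.emplace().getFuture();

        this->StartWrite(&element);
        co_return co_await std::move(future); // false means the write failed
    }

    // writeDone() wraps StartWritesDone() the same way, awaiting OnWritesDoneDone;
    // OnDone/done() and the Cancellable wrapping mirror the Reader.

private:
    std::optional<asyncio::Promise<bool>> mWritePromise;
};
```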
Client Bidirectional Streaming
Bidirectional streaming is the most complex of the four patterns: the client simultaneously has both read and write capabilities. Fortunately, all that is needed is to merge the Reader and Writer logic into a single Stream class:
```cpp
template <typename RequestElement, typename ResponseElement>
class Stream final : public grpc::ClientBidiReactor<RequestElement, ResponseElement> {
public:
    // OnReadDone, OnWriteDone, OnWritesDoneDone, OnDone are the same as before

    asyncio::task::Task<bool> write(const RequestElement element) { /* same as Writer */ }
    asyncio::task::Task<bool> writeDone() { /* same as Writer */ }
    asyncio::task::Task<void> done() { /* same as Reader */ }
};
```
all() waits for both the read and write subtasks simultaneously. If either fails, it cancels the other and returns the error. This is the task-tree mechanism of asyncio in action — structured concurrency.
Wrapping GenericClient
The three streaming wrappers above all follow the same pattern, so it is time to unify them with templates. GenericClient provides one call overload for each of the four RPC types:
```cpp
// 1. Unary (tail of the overload shown earlier)
    if (const auto result = co_await asyncio::task::Cancellable{
            promise.getFuture(),
            [&]() -> std::expected<void, std::error_code> {
                context->TryCancel();
                return {};
            }
        };
        !result)
        throw co_await asyncio::error::StacktraceError<std::runtime_error>::make(result.error());

    co_return response;
}

// 2. Server streaming: push data into a channel via Sender
template <typename Request, typename Element>
asyncio::task::Task<void> call(
    void (AsyncStub::*method)(grpc::ClientContext *, const Request *, grpc::ClientReadReactor<Element> *),
    std::shared_ptr<grpc::ClientContext> context,
    Request request,
    asyncio::Sender<Element> sender
) {
    Reader<Element> reader{context};
    std::invoke(method, mStub->async(), context.get(), &request, &reader);

    reader.AddHold();
    reader.StartCall();

    const auto result = co_await asyncio::error::capture(
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            while (true) {
                auto element = co_await reader.read();
                // ... (break at end of stream, forward each element into the sender)
            }
        })
    );

    if (!result)
        std::rethrow_exception(result.error());
}

// 3. Client streaming: read data from Receiver and write into the stream
template <typename Response, typename Element>
asyncio::task::Task<Response> call(
    void (AsyncStub::*method)(grpc::ClientContext *, Response *, grpc::ClientWriteReactor<Element> *),
    std::shared_ptr<grpc::ClientContext> context,
    asyncio::Receiver<Element> receiver
) {
    Response response;
    Writer<Element> writer{context};
    std::invoke(method, mStub->async(), context.get(), &response, &writer);

    writer.AddHold();
    writer.StartCall();

    const auto result = co_await asyncio::error::capture(
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            while (true) {
                auto element = co_await receiver.receive();

                if (!element) {
                    if (!co_await writer.writeDone())
                        fmt::print(stderr, "Write done failed\n");

                    if (element.error() != asyncio::ReceiveError::Disconnected)
                        throw co_await asyncio::error::StacktraceError<std::system_error>::make(element.error());

                    break; // channel closed normally: the stream is done
                }

                // ... (write *element into the stream)
            }
        })
    );

    if (!result)
        std::rethrow_exception(result.error());

    co_return response;
}

// 4. Bidirectional streaming: hold both a Receiver (input) and a Sender (output)
template <typename RequestElement, typename ResponseElement>
asyncio::task::Task<void> call(
    void (AsyncStub::*method)(grpc::ClientContext *, grpc::ClientBidiReactor<RequestElement, ResponseElement> *),
    std::shared_ptr<grpc::ClientContext> context,
    asyncio::Receiver<RequestElement> receiver,
    asyncio::Sender<ResponseElement> sender
) {
    Stream<RequestElement, ResponseElement> stream{context};
    std::invoke(method, mStub->async(), context.get(), &stream);

    stream.AddHold();
    stream.StartCall();

    const auto result = co_await asyncio::error::capture(
        all(
            asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                while (true) {
                    auto element = co_await stream.read();

                    if (!element)
                        break;

                    if (const auto res = co_await sender.send(*std::move(element)); !res) {
                        context->TryCancel();
                        throw co_await asyncio::error::StacktraceError<std::system_error>::make(res.error());
                    }
                }
            }),
            asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                while (true) {
                    auto element = co_await receiver.receive();

                    if (!element) {
                        if (!co_await stream.writeDone())
                            fmt::print(stderr, "Write done failed\n");

                        if (element.error() != asyncio::ReceiveError::Disconnected)
                            throw co_await asyncio::error::StacktraceError<std::system_error>::make(element.error());

                        break; // channel closed normally: writing is done
                    }

                    // ... (write *element into the stream)
                }
            })
        )
    );

    if (!result)
        std::rethrow_exception(result.error());
}

private:
    std::unique_ptr<Stub> mStub;
};
```
The four overloads are distinguished automatically by parameter types — the compiler selects the correct overload based on the method pointer type passed in. This is a classic use of template metaprogramming: different call patterns map to different function signatures, with zero runtime overhead.
For streaming RPCs, GenericClient uses asyncio::channel as the data conduit: Sender writes data into the channel, Receiver reads from it. The channel’s close signal (Receiver receiving a Disconnected error) naturally maps to stream EOF.
Implementing the Concrete Client
With GenericClient in place, implementing a concrete service client is straightforward:
The channel-based pipeline connecting getNumbers and sum is especially worth noting: numbers produced by the server-streaming RPC flow directly through the channel into the client-streaming RPC. The whole pipeline looks like synchronous code, but is fully asynchronous underneath.
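The pipeline described above can be sketched as follows. The channel construction, spawn, and await shapes here are assumptions about asyncio's API, not verbatim library calls:

```cpp
// Sketch (assumed names): server-streaming output feeds client-streaming input.
asyncio::task::Task<void> pipeline(Client &client) {
    auto [sender, receiver] = asyncio::channel<sample::Number>(64);

    // Producer: the server-streaming RPC pushes numbers into the channel.
    auto producer = asyncio::task::spawn(client.getNumbers(0, 10, std::move(sender)));

    // Consumer: the client-streaming RPC drains the channel and aggregates.
    const auto response = co_await client.sum(std::move(receiver));

    co_await producer;
    fmt::print("total={} count={}\n", response.total(), response.count());
}
```

Closing the sender when getNumbers finishes is what delivers the Disconnected signal that sum treats as end-of-stream.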
Server Concurrency Model
The server has one extra dimension compared to the client: concurrency. A production-grade service must handle multiple client requests simultaneously, which requires the framework to dynamically manage an unknown number of concurrent tasks.
How CompletionQueue Works
The heart of the gRPC async server is CompletionQueue. The usage is roughly: when registering an async operation with gRPC, pass in a void *tag; when the operation completes, call CompletionQueue::Next() to retrieve the tag and learn whether the operation succeeded (bool ok):
```cpp
void *tag{};
bool ok{};

while (cq->Next(&tag, &ok)) {
    // tag points to the object we passed in at registration; ok is the result
    dispatch(tag, ok);
}
```
But Next() is a blocking call — it blocks until an event arrives. Calling it directly on the event loop thread would freeze all of asyncio. The solution is to run it in a separate thread using asyncio::toThread:
```cpp
asyncio::toThread([this] {
    void *tag{};
    bool ok{};

    while (mCompletionQueue->Next(&tag, &ok)) {
        // "Post" the completion event back to the event loop thread
        static_cast<asyncio::Promise<bool> *>(tag)->resolve(ok);
    }
})
```
Bridging with Promise
The other key question is: what does the tag point to? When registering an async operation, we pass the address of an asyncio::Promise<bool> as the tag:
```cpp
asyncio::Promise<bool> promise;

service->RequestEcho(
    context.get(),
    &request,
    &writer,
    mCompletionQueue.get(),
    mCompletionQueue.get(),
    &promise // <-- the tag is the address of the Promise
);

// Suspend the coroutine, wait for CompletionQueue notification
if (!co_await promise.getFuture())
    break; // ok == false means the CompletionQueue has shut down
```
When the operation completes, Next() returns tag = &promise, which then calls promise->resolve(ok). This resolve wakes up the coroutine suspended at co_await promise.getFuture(), seamlessly delivering the gRPC completion notification to the coroutine world.
The entire GenericServer::run() coordinates the two sides exactly this way:
```cpp
virtual asyncio::task::Task<void> run() {
    co_await all(
        dispatch(), // coroutine side: wait for and dispatch requests
        asyncio::toThread([this] { // thread side: block-poll the CompletionQueue
            void *tag{};
            bool ok{};

            while (mCompletionQueue->Next(&tag, &ok)) {
                static_cast<asyncio::Promise<bool> *>(tag)->resolve(ok);
            }
        })
    );
}
```
dispatch() and toThread(...) run concurrently: the thread continuously pulls completion events from CompletionQueue and resolves the corresponding Promise; the coroutine side waits at co_await promise.getFuture() to be woken up, then handles a new request or a read/write completion each time. The two sides are decoupled through Promise and never block each other.
Dynamic Concurrency with TaskGroup
With the CompletionQueue bridge in place, the concurrency problem becomes clear. asyncio's TaskGroup is exactly what is needed to manage dynamically spawned concurrent tasks:
```cpp
while (true) {
    // Wait for the next request
    asyncio::Promise<bool> promise;
    service->RequestEcho(context, &request, &writer, cq, cq, &promise);

    if (!co_await promise.getFuture())
        break; // CompletionQueue has shut down

    // Spawn an independent handler task for this request
    auto task = asyncio::task::spawn(handleRequest(...));

    // Add to TaskGroup; the task removes itself automatically when it completes
    group.add(task);

    // Attach an error callback so unhandled exceptions are not silently lost
    task.future().fail([](const auto &e) {
        fmt::print(stderr, "Unhandled exception: {}\n", e);
    });
}

// On shutdown: cancel all in-flight requests and wait for them to finish
std::ignore = group.cancel();
co_await group;
```
A few important design decisions here:
- Accept loop decoupled from handler tasks: the loop waiting for new requests never blocks on handler logic, ensuring high concurrency.
- TaskGroup manages lifetimes: after each handler task joins the TaskGroup, a single cancel-and-wait call at shutdown drains all of them cleanly.
- Error isolation: a single request failure does not affect the rest of the service; errors are captured by the fail callback and logged, while other requests continue processing.
Server Unary RPC
With the concurrency model as background, implementing server Unary RPC is straightforward. The server uses grpc::ServerAsyncResponseWriter<Response> to send the response:
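A sketch of one request's lifecycle, consistent with the fragments elsewhere in the article (exact signatures are assumptions):

```cpp
// Sketch: handle one unary Echo request end to end.
asyncio::task::Task<void> handleEcho(
    std::shared_ptr<grpc::ServerContext> context,
    sample::EchoRequest request,
    std::shared_ptr<grpc::ServerAsyncResponseWriter<sample::EchoResponse>> writer
) {
    // Run the business coroutine; capture turns exceptions into std::expected.
    const auto result = co_await asyncio::error::capture(echo(std::move(request)));

    asyncio::Promise<bool> promise;

    if (result)
        writer->Finish(*result, grpc::Status::OK, &promise);
    else
        writer->FinishWithError(grpc::Status{grpc::StatusCode::INTERNAL, "handler failed"}, &promise);

    co_await promise.getFuture(); // wait until the Finish operation completes
}
```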
asyncio::error::capture is a utility that catches exceptions thrown by a coroutine and converts them to std::expected, enabling error handling with value semantics rather than exceptions. This way, even if business logic throws, we can gracefully convert it into a gRPC error status returned to the client instead of crashing the entire program.
Server Streaming RPCs
Server streaming RPCs require separate async wrappers for client-side reading and server-side writing. Unlike the client’s Reactor pattern, all server-side read/write operations go through CompletionQueue — pass a Promise address as the tag, and when the operation completes Next() retrieves it and calls resolve, delivering the completion notification to the coroutine.
Server-Side Reader (Client Streaming)
When the server reads data from the client, it uses grpc::ServerAsyncReader<Response, Element>. It has two type parameters: Response is the type returned to the client at the end, and Element is the type of each stream element read.
There is a subtle problem with this design: the Response type leaks into Reader itself, meaning different response types require different Reader instances. We work around this with type erasure — hide the concrete Response type inside Impl, and expose only the generic IReader interface to the outside:
Notice the co_await asyncio::task::cancelled check in the read() implementation — this is the key to distinguishing “reached end of stream (ok == false)” from “task was cancelled”. The former is a normal termination signal; the latter should propagate a cancellation error upward.
With Reader in place, the client-streaming request accept loop mirrors Unary: wait for a new request, construct a Reader and pass it to the business function, then send back the Response returned by the business function via reader->Finish():
```cpp
while (true) {
    auto context = std::make_shared<grpc::ServerContext>();
    auto reader = std::make_shared<grpc::ServerAsyncReader<sample::SumResponse, sample::Number>>(context.get());
    // ...
}
```
The server-side Writer is slightly simpler than the client-side one — the server does not need to send WritesDone (that is the client’s privilege).
The original gRPC design has the stream end via the Writer’s Finish or FinishWithError, but we apply a similar redesign as for Reader.
The server-streaming request accept loop mirrors client streaming: wait for a new request, construct a Writer and pass it to the business function; after the business function returns, call writer->Finish() to end the stream:
The bidirectional streaming request accept loop follows the same familiar pattern: register with ServerAsyncReaderWriter, construct a Stream and pass it to the business function chat; after chat returns, call stream->Finish() to wrap up:
```cpp
while (true) {
    auto context = std::make_shared<grpc::ServerContext>();
    auto stream = std::make_shared<grpc::ServerAsyncReaderWriter<
        sample::ChatMessage, sample::ChatMessage>>(context.get());
    // ...
}
```
With all the components in place, GenericServer encapsulates the “accept request → dispatch handler → graceful shutdown” flow into four handle overloads, one per RPC type:
```cpp
// The server streaming, client streaming, and bidi streaming overloads
// follow the exact same structure as Unary. The only differences are
// the method pointer type and how the IO wrapper is constructed:
//   server streaming → Writer{context, writer} + writer->Finish()
//   client streaming → Reader<E>{make_unique<Impl>(context, reader)} + reader->Finish() / FinishWithError()
//   bidi streaming   → Stream{context, stream} + stream->Finish()
```
The requires constraints enforce correct handler signatures — the compiler checks at instantiation time, and if you pass a handler with the wrong signature you get a clear compile-time error instead of a runtime crash.
Implementing the Concrete Server
The final Server implementation simply inherits GenericServer and fills in the business logic and dispatch:
```cpp
private:
    // Unary: return Response directly; errors are automatically converted to gRPC error status
    static asyncio::task::Task<sample::EchoResponse> echo(sample::EchoRequest request) {
        sample::EchoResponse response;
        response.set_message(request.message());
        response.set_timestamp(std::time(nullptr));
        co_return response;
    }

    // Server streaming: accept a Writer, write elements one by one
    static asyncio::task::Task<void> getNumbers(sample::GetNumbersRequest request, Writer<sample::Number> writer) {
        for (int i = 0; i < request.count(); ++i) {
            sample::Number number;
            number.set_value(request.value() + i);
            co_await writer.write(number);
        }
    }

    // Client streaming: accept a Reader, read and aggregate
    static asyncio::task::Task<sample::SumResponse> sum(Reader<sample::Number> reader) {
        int total{0}, count{0};

        while (const auto number = co_await reader.read()) {
            total += number->value();
            ++count;
        }

        sample::SumResponse response;
        response.set_total(total);
        response.set_count(count);
        co_return response;
    }

    // Bidirectional streaming: read one message, echo one back
    static asyncio::task::Task<void> chat(Stream<sample::ChatMessage, sample::ChatMessage> stream) {
        while (const auto message = co_await stream.read()) {
            sample::ChatMessage response;
            response.set_user("Server");
            response.set_timestamp(std::time(nullptr));
            response.set_content(fmt::format("Echo: {}", message->content()));
            co_await stream.write(response);
        }
    }

    // Bind method pointers to handlers and start the listening loop for each RPC
    asyncio::task::Task<void> dispatch() override {
        co_await all(
            handle(&sample::SampleService::AsyncService::RequestEcho, echo),
            handle(&sample::SampleService::AsyncService::RequestGetNumbers, getNumbers),
            handle(&sample::SampleService::AsyncService::RequestSum, sum),
            handle(&sample::SampleService::AsyncService::RequestChat, chat)
        );
    }
};
```
Business code and framework code are completely separated. The four static functions echo, getNumbers, sum, and chat only need to care about their own logic: accept request parameters, perform IO through Reader/Writer/Stream, and return results. Everything else — waiting for new requests, handling concurrency, writing back status at completion, graceful shutdown — is handled entirely by GenericServer.
Graceful Shutdown
In production, a process must ensure all in-progress RPCs are properly handled before exiting. The graceful shutdown pattern with asyncio + gRPC looks like this:
1. SIGINT is received; the signal-watching task inside race completes.
2. race cancels the other task, triggering the Cancellable cancellation function, which calls event.set().
3. event.wait() returns, and server.shutdown() is called — it runs mServer->Shutdown() and mCompletionQueue->Shutdown() in a thread pool, telling gRPC to stop accepting new requests and close the CompletionQueue.
4. Once the CompletionQueue is closed, the thread-pool task in GenericServer::run() returns, and server.run() completes.
5. Meanwhile, the accept loops in each handle() method exit because promise.getFuture() returns false, then cancel all in-flight handler tasks in the TaskGroup and wait for them to finish.
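The wiring of that sequence can be sketched roughly as below. This is a simplification under assumed asyncio names (Event, race, signal waiting); in particular, wrapping the serving task in a Cancellable whose cancellation function sets the event is my reading of the steps above, not verbatim code:

```cpp
// Sketch (assumed names): SIGINT -> cancel serving task -> event -> shutdown.
asyncio::task::Task<void> asyncMain() {
    Server server{/* ... */};
    asyncio::Event event;

    // If this task is cancelled, the cancellation function fires event.set().
    auto serve = [&]() -> asyncio::task::Task<void> {
        co_await asyncio::task::Cancellable{
            server.run(),
            [&]() -> std::expected<void, std::error_code> {
                event.set();
                return {};
            }
        };
    };

    // Waits for the event, then asks gRPC to stop accepting and drain.
    auto shutdown = [&]() -> asyncio::task::Task<void> {
        co_await event.wait();
        co_await server.shutdown(); // mServer->Shutdown() + mCompletionQueue->Shutdown()
    };

    // When SIGINT arrives, race cancels serve(), which triggers event.set().
    co_await all(
        race(asyncio::signal::wait(SIGINT), serve()),
        shutdown()
    );
}
```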
```cpp
virtual asyncio::task::Task<void> run() {
    co_await all(
        dispatch(),
        asyncio::toThread([this] {
            void *tag{};
            bool ok{};

            // Block on Next(), forwarding each completion event to the corresponding Promise
            while (mCompletionQueue->Next(&tag, &ok))
                static_cast<asyncio::Promise<bool> *>(tag)->resolve(ok);
        })
    );
}
```
asyncio::toThread moves blocking operations to the thread pool, ensuring the event loop thread is never blocked. CompletionQueue::Next is blocking and must run in a separate thread; Server::Shutdown may also block and is likewise offloaded.
Summary
Looking back over the journey, it is clear what asyncio fundamentally changes about gRPC async programming:
Callbacks become coroutines. The Promise/Future pair is the cornerstone of the entire approach. Every gRPC async callback — OnReadDone, OnWriteDone, OnDone, and the tags returned from CompletionQueue — is uniformly converted into resolve/reject calls on a Promise. The coroutine side needs only co_await.
Cancellation support with zero business-code intrusion. Through the Cancellable wrapper, every async await point naturally gains cancellation capability. Business code does not need to carry a context parameter or check a cancellation flag. The cancellation signal propagates automatically along the task chain until it reaches the waiting gRPC operation and calls TryCancel().
TaskGroup solves dynamic concurrency. The server must handle an unknown number of concurrent requests simultaneously. TaskGroup allows tasks to be added dynamically, cancelled in bulk, and awaited in bulk — exactly the right solution for this scenario.
Template constraints improve safety. GenericClient and GenericServer use overloading and requires constraints so that the compiler verifies handler signatures at instantiation time, turning potential runtime errors into compile-time failures.
Graceful shutdown without boilerplate. The event + race + Cancellable combination cleanly implements the full “receive signal → trigger shutdown → wait for all tasks” sequence, with almost no extra state variables.
The final business code demonstrates the value of all this: echo, getNumbers, sum, and chat read like ordinary synchronous functions — no nested callbacks, no state machines, no explicit lifetime management. Yet behind them is a complete, production-ready async gRPC service with high concurrency, cancellation support, and graceful shutdown.
This is the promise of coroutines: make async code as readable as sync code, while retaining all the performance advantages of async execution.