Merged

stuff #156

14 changes: 1 addition & 13 deletions doc/modules/ROOT/pages/2.cpp20-coroutines/2.intro.adoc
Original file line number Diff line number Diff line change
@@ -15,16 +15,4 @@ C++20 coroutines change the rules. A coroutine can _suspend_ its execution--savi

This is not a minor syntactic convenience. It is a fundamental shift in how you can structure programs that wait.

== What You Will Learn

This section takes you from zero to a working understanding of C++20 coroutines. You do not need prior experience with coroutines, async programming, or any coroutine library.

* **xref:2a.foundations.adoc[Foundations]** -- How regular functions use the call stack, what happens when a function needs to pause, and how coroutines solve the problem by decoupling a function's lifetime from its stack frame.

* **xref:2b.syntax.adoc[C++20 Syntax]** -- The three coroutine keywords (`co_await`, `co_return`, `co_yield`), what the compiler does when it sees them, and how to write your first coroutine.

* **xref:2c.machinery.adoc[Coroutine Machinery]** -- The promise type, coroutine handles, and the protocols that connect your coroutine to the runtime. This is where you see how the compiler transforms your code and how you can customize that transformation.

* **xref:2d.advanced.adoc[Advanced Topics]** -- Symmetric transfer, heap allocation elision optimization (HALO), and the performance characteristics that make coroutines practical for high-throughput systems.

By the end of this section, you will understand not only _how_ to write coroutines, but _why_ they work the way they do--knowledge that will make everything in the rest of this documentation click into place.
This section takes you from zero to a working understanding of C++20 coroutines. No prior experience with coroutines or async programming is needed. You will start with the problem that coroutines solve, move through the language syntax and compiler machinery, and finish with the performance characteristics that make coroutines practical for real systems. By the end, you will understand not only _how_ to write coroutines but _why_ they work the way they do--knowledge that will make everything in the rest of this documentation click into place.
14 changes: 1 addition & 13 deletions doc/modules/ROOT/pages/3.concurrency/3.intro.adoc
@@ -15,16 +15,4 @@ Yet concurrent programming has a reputation for being treacherous, and that reput

The good news: these problems are well understood. Decades of research and practice have produced clear patterns, precise vocabulary, and reliable tools. Once you understand the fundamentals--what a data race actually is, why memory ordering matters, how synchronization primitives work--concurrent code becomes something you can reason about with confidence.

== What You Will Learn

This section builds your understanding of concurrency from first principles. You do not need any prior experience with threads or parallel programming.

* **xref:3a.foundations.adoc[Foundations]** -- What threads are, how they share memory, and why running code in parallel introduces problems that sequential programs never face.

* **xref:3b.synchronization.adoc[Synchronization]** -- Mutexes, locks, condition variables, and the mechanisms that let threads coordinate safely. You will learn when each tool is appropriate and what it actually guarantees.

* **xref:3c.advanced.adoc[Advanced Primitives]** -- Atomics, memory ordering, and lock-free techniques. These are the building blocks underneath the higher-level tools, and understanding them gives you the power to make informed performance decisions.

* **xref:3d.patterns.adoc[Communication & Patterns]** -- Producer-consumer queues, thread pools, and the architectural patterns that structure concurrent systems. These patterns appear everywhere, from operating systems to web servers to game engines.

When you finish this section, you will have the vocabulary and mental models to understand how Capy's coroutine-based concurrency works under the hood--and why it eliminates entire categories of the bugs described here.
This section builds your understanding of concurrency from first principles. No prior experience with threads or parallel programming is needed. You will learn what makes concurrent code hard to reason about, how the standard synchronization tools work, and the architectural patterns that tame that complexity. When you finish, you will have the vocabulary and mental models to understand how Capy's coroutine-based concurrency works under the hood--and why it eliminates entire categories of the bugs described here.
18 changes: 1 addition & 17 deletions doc/modules/ROOT/pages/4.coroutines/4.intro.adoc
@@ -15,20 +15,4 @@ Capy's coroutine model is built around a single principle: asynchronous code sho

But this is not magic, and it is not a black box. Every piece of Capy's coroutine infrastructure is designed to be transparent. You can see how tasks are scheduled, control where they run, propagate cancellation, compose concurrent operations, and tune memory allocation. Understanding these mechanisms is what separates someone who uses the library from someone who uses it _well_.

== What You Will Learn

* **xref:4a.tasks.adoc[The task Type]** -- Capy's fundamental coroutine type. Lazy execution, symmetric transfer, executor inheritance, and stop token propagation--everything a `task<T>` gives you out of the box.

* **xref:4b.launching.adoc[Launching Coroutines]** -- How to start tasks running: `co_await`, `spawn`, `run_async`, and the differences between them. When to use each, and what happens to exceptions and cancellation.

* **xref:4c.executors.adoc[Executors and Execution Contexts]** -- Where your coroutines run. Thread pools, strands, executor binding, and how Capy ensures your code executes on the right thread.

* **xref:4d.io-awaitable.adoc[The IoAwaitable Protocol]** -- The contract between I/O operations and the coroutine runtime. How `io_result` works, what the compiler sees, and how to write your own awaitable operations.

* **xref:4e.cancellation.adoc[Stop Tokens and Cancellation]** -- Cooperative cancellation that propagates through your entire call tree. How to check for cancellation, respond to it gracefully, and design operations that clean up properly.

* **xref:4f.composition.adoc[Concurrent Composition]** -- Running multiple operations simultaneously with `when_all` and `when_any`. Fan-out/fan-in patterns, timeouts, and racing operations against each other.

* **xref:4g.allocators.adoc[Frame Allocators]** -- Controlling where coroutine frames are allocated. Custom allocators, arena strategies, and the techniques that eliminate allocation overhead in hot paths.

This section is the bridge between theory and practice. By the end, you will be writing real asynchronous programs with Capy.
This section is the bridge between theory and practice. You will see how Capy turns C++20 coroutines into a complete async programming model--from launching and scheduling tasks, through cancellation and concurrent composition, to fine-grained control over memory allocation. Each topic builds on the last, and by the end you will be writing real asynchronous programs with Capy.
16 changes: 1 addition & 15 deletions doc/modules/ROOT/pages/5.buffers/5.intro.adoc
@@ -15,18 +15,4 @@ The obvious answer is a pointer and a size. And for a single contiguous buffer,

Capy's buffer model is designed for this reality. Instead of forcing you to copy data into a single contiguous allocation, Capy uses _buffer sequences_--lightweight, zero-copy abstractions that let you describe any arrangement of memory and pass it directly to the OS. The design is concept-driven, meaning the compiler verifies correctness at compile time with no runtime overhead.

== What You Will Learn

* **xref:5a.overview.adoc[Why Concepts, Not Spans]** -- Why `std::span` falls short for I/O, how scatter/gather operations work, and the design reasoning behind Capy's concept-based approach.

* **xref:5b.types.adoc[Buffer Types]** -- `const_buffer`, `mutable_buffer`, and `make_buffer`--the fundamental building blocks for describing contiguous memory regions.

* **xref:5c.sequences.adoc[Buffer Sequences]** -- How to compose multiple buffers into sequences that I/O operations consume in a single call, without copying.

* **xref:5d.system-io.adoc[System I/O Integration]** -- How buffer sequences map to operating system primitives like `readv` and `writev`, and why this matters for performance.

* **xref:5e.algorithms.adoc[Buffer Algorithms]** -- Operations on buffer sequences: copying, prefix/suffix extraction, and the tools that make working with scattered data practical.

* **xref:5f.dynamic.adoc[Dynamic Buffers]** -- Resizable buffers that grow as data arrives. The `DynamicBuffer` concept and how it integrates with stream operations for protocol parsing and message assembly.

Understanding buffers is essential for everything that follows. Streams, I/O operations, and protocol implementations all build on the abstractions introduced here.
This section covers everything you need to work with memory in Capy's I/O model. You will learn the fundamental buffer types, how to compose them into sequences for scatter/gather I/O, and how they map to operating system primitives. You will also meet the algorithms that manipulate buffer data and the dynamic buffer abstractions that grow as data arrives. Understanding buffers is essential for everything that follows--streams, I/O operations, and protocol implementations all build on the abstractions introduced here.
16 changes: 1 addition & 15 deletions doc/modules/ROOT/pages/6.streams/6.intro.adoc
@@ -17,18 +17,4 @@ A socket might give you 47 bytes when you asked for 1024. That is not an error--

On top of this, Capy adds _buffer sources_ and _buffer sinks_--concepts that work with dynamic buffers, enabling protocol parsers and message builders to grow their storage as needed without manual bookkeeping.

== What You Will Learn

* **xref:6a.overview.adoc[Overview]** -- The six stream concepts at a glance, how they relate to each other, and which one to reach for in different situations.

* **xref:6b.streams.adoc[Streams (Partial I/O)]** -- `ReadStream` and `WriteStream`--the concepts for operations that transfer _some_ data and return immediately. The building blocks for everything else.

* **xref:6c.sources-sinks.adoc[Sources and Sinks (Complete I/O)]** -- `ReadSource` and `WriteSink`--the concepts for operations that transfer _all_ requested data or report an error. Built on top of streams, with well-defined completion guarantees.

* **xref:6d.buffer-concepts.adoc[Buffer Sources and Sinks]** -- `BufferSource` and `BufferSink`--concepts that pair complete I/O with dynamic buffers for protocol-level operations.

* **xref:6e.algorithms.adoc[Transfer Algorithms]** -- Generic algorithms that move data between streams, sources, and sinks. Composable, efficient, and independent of any particular transport.

* **xref:6f.isolation.adoc[Physical Isolation]** -- How Capy's stream concepts enable you to test, mock, and compose I/O layers without coupling to specific transports. Write your logic once; run it over TCP, TLS, pipes, or in-memory buffers.

These concepts are the vocabulary of Capy's I/O model. Once you understand them, every I/O operation in the library will feel familiar.
This section introduces the concepts that form Capy's vocabulary for data flow. You will learn the distinction between partial and complete I/O, how the concept pairs relate to each other, and how transfer algorithms and physical isolation let you write I/O logic that is composable, testable, and independent of any particular transport. Once you understand these concepts, every I/O operation in the library will feel familiar.
24 changes: 1 addition & 23 deletions doc/modules/ROOT/pages/7.examples/7.intro.adoc
@@ -11,26 +11,4 @@

The best way to learn a library is to watch it solve real problems. This section is a collection of complete, working programs that demonstrate how the pieces you have learned--tasks, buffers, streams, cancellation, composition--fit together in practice.

Each example is self-contained. You can compile and run it. The code is followed by detailed explanations of what it does, why it is structured that way, and what happens at each step. Start with the examples that interest you most, or work through them in order for a guided tour of Capy's capabilities.

== What You Will Find

* **xref:7a.hello-task.adoc[Hello Task]** -- The minimal Capy program. Create a task, run it on a thread pool, and see coroutine execution in action.

* **xref:7b.producer-consumer.adoc[Producer-Consumer]** -- Two coroutines communicating through a shared channel. A classic concurrency pattern, implemented without threads or locks.

* **xref:7c.buffer-composition.adoc[Buffer Composition]** -- Assembling I/O from multiple memory regions using buffer sequences. Zero-copy message construction in practice.

* **xref:7d.mock-stream-testing.adoc[Mock Stream Testing]** -- Testing I/O logic without a network. In-memory streams that simulate sockets, including partial reads and error injection.

* **xref:7e.type-erased-echo.adoc[Type-Erased Echo]** -- An echo server that works over any transport. Demonstrates physical isolation and type erasure for streams.

* **xref:7f.timeout-cancellation.adoc[Timeout with Cancellation]** -- Racing an operation against a deadline. Cooperative cancellation with `when_any` and stop tokens.

* **xref:7g.parallel-fetch.adoc[Parallel Fetch]** -- Launching multiple operations concurrently and collecting results. Fan-out/fan-in with `when_all`.

* **xref:7h.custom-dynamic-buffer.adoc[Custom Dynamic Buffer]** -- Implementing your own `DynamicBuffer` for specialized allocation strategies.

* **xref:7i.echo-server-corosio.adoc[Echo Server with Corosio]** -- A complete multi-client echo server using Corosio for socket I/O. The full picture: accept loop, per-connection coroutines, graceful shutdown.

* **xref:7j.stream-pipeline.adoc[Stream Pipeline]** -- Chaining stream transformations. Data flows through multiple processing stages, each implemented as a stream adapter.
Every example is self-contained and compiles as a standalone program. The code is followed by detailed explanations of what it does, why it is structured that way, and what happens at each step. The examples range from minimal starting points to fully featured servers, covering real-world integration with Corosio. Start with whatever interests you most, or work through them in order for a guided tour of Capy's capabilities.
26 changes: 1 addition & 25 deletions doc/modules/ROOT/pages/8.design/8.intro.adoc
@@ -11,28 +11,4 @@

Capy's public interface--tasks, buffers, streams--is intentionally small. Behind that interface are design decisions that determine how concepts compose, where responsibility boundaries fall, and what guarantees the library can make. This section documents those decisions.

Each page in this section examines one concept or facility in depth. You will find the formal concept definition, the rationale for its design, the alternatives that were considered, and the tradeoffs that were made. If you have ever wondered _why_ `ReadStream` requires `read_some` instead of `read`, or why buffer sinks and sources exist as separate concepts from streams, the answers are here.

== What You Will Find

* **xref:8a.ReadStream.adoc[ReadStream]** -- The partial-read concept. Why `read_some` is the correct primitive, how it composes with algorithms, and its relationship to `ReadSource`.

* **xref:8b.ReadSource.adoc[ReadSource]** -- The complete-read concept. Guaranteed delivery semantics, EOF handling, and the contract between sources and consumers.

* **xref:8c.BufferSource.adoc[BufferSource]** -- Pairing complete reads with dynamic buffers. How protocol parsers use `BufferSource` to accumulate data incrementally.

* **xref:8d.WriteStream.adoc[WriteStream]** -- The partial-write concept. Symmetric design with `ReadStream`, and how write algorithms handle short writes.

* **xref:8e.WriteSink.adoc[WriteSink]** -- The complete-write concept. Guaranteed delivery for outbound data, and the composition with serialization layers.

* **xref:8f.BufferSink.adoc[BufferSink]** -- Dynamic buffer output. How message builders and serializers produce output without knowing the transport.

* **xref:8g.RunApi.adoc[Run API]** -- The entry points for executing coroutines: `run`, `run_async`, and the bridge between synchronous and asynchronous worlds.

* **xref:8h.TypeEraseAwaitable.adoc[Type-Erasing Awaitables]** -- Erasing the concrete type of an awaitable behind a uniform interface. When type erasure is worth the cost, and how Capy implements it.

* **xref:8i.any_buffer_sink.adoc[AnyBufferSink]** -- A type-erased buffer sink. Combining the `BufferSink` concept with type erasure for runtime polymorphism.

* **xref:8j.Executor.adoc[Executor]** -- The executor concept. Why `dispatch` returns `void`, why `defer` was dropped, how `executor_ref` achieves zero-allocation type erasure, and the I/O completion pattern that motivates the design.

These documents are reference material for library contributors and advanced users. They assume familiarity with the tutorial sections and focus on design reasoning rather than usage.
Each page examines one concept or facility in depth: its formal definition, the rationale behind its design, the alternatives that were considered, and the tradeoffs that were made. If you have ever wondered _why_ a particular concept requires a specific primitive, or why certain abstractions exist as separate concepts, the answers are here. These documents are reference material for library contributors and advanced users. They assume familiarity with the tutorial sections and focus on design reasoning rather than usage.
1 change: 1 addition & 0 deletions example/CMakeLists.txt
@@ -22,4 +22,5 @@ if(TARGET Boost::corosio)
add_subdirectory(echo-server-corosio)
endif()

add_subdirectory(allocation)
add_subdirectory(asio)
22 changes: 22 additions & 0 deletions example/allocation/CMakeLists.txt
@@ -0,0 +1,22 @@
#
# Copyright (c) 2026 Mungo Gill
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#
# Official repository: https://github.com/cppalliance/capy
#

file(GLOB_RECURSE PFILES CONFIGURE_DEPENDS *.cpp *.hpp
CMakeLists.txt
Jamfile)

source_group(TREE ${CMAKE_CURRENT_SOURCE_DIR} PREFIX "" FILES ${PFILES})

add_executable(capy_example_allocation ${PFILES})

set_property(TARGET capy_example_allocation
PROPERTY FOLDER "examples")

target_link_libraries(capy_example_allocation
Boost::capy)
18 changes: 18 additions & 0 deletions example/allocation/Jamfile
@@ -0,0 +1,18 @@
#
# Copyright (c) 2026 Mungo Gill
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#
# Official repository: https://github.com/cppalliance/capy
#

project
: requirements
<library>/boost/capy//boost_capy
<include>.
;

exe allocation :
[ glob *.cpp ]
;