**docs/context-affinity.md**
For subinterpreters, environments are created inside the target interpreter to ensure memory safety, since Python's subinterpreters have isolated memory allocators.
### Thread Safety for C Extensions
Contexts in MULTI_EXECUTOR mode have automatic thread affinity. Each context is
assigned a dedicated executor thread at creation, ensuring that all Python operations
run on the same OS thread. This is critical for C extensions with thread-local state:

- **numpy** - uses thread-local random state and BLAS threading
- **torch** - maintains thread-local state for CUDA and CPU operations
- **tensorflow** - keeps thread-local session state

No configuration is needed - thread affinity is enabled automatically.
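To see what "thread-local state" means in practice, here is a plain-Python sketch using `threading.local()` (an illustration of the general mechanism, not this library's API): state initialized on one OS thread is simply invisible from any other thread, which is why a context's operations must always run on its pinned executor thread.

```python
import threading

# C extensions like numpy keep per-thread state analogous to threading.local().
state = threading.local()

def init_state():
    # Set thread-local state on the calling thread only.
    state.rng_seed = 42

def read_state():
    # Threads that never called init_state() see no attribute at all.
    return getattr(state, "rng_seed", None)

init_state()
print(read_state())  # 42 - the initializing thread sees its own state

results = []
t = threading.Thread(target=lambda: results.append(read_state()))
t.start()
t.join()
print(results[0])  # None - a different OS thread sees nothing
```

If the runtime migrated a context between executor threads mid-lifetime, the extension's state would silently reset (or worse, corrupt), which is exactly what pinning prevents.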
## Best Practices
1. **Use explicit contexts for stateful operations**: `Ctx = py:context(1)` ensures state persists
**docs/scalability.md**
Runs N executor threads that share the GIL. Requests are distributed round-robin across executors. Good for I/O-bound workloads where Python releases the GIL during I/O operations.
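The round-robin dispatch described above can be sketched in a few lines of Python (hypothetical names, not this library's implementation): each executor thread drains its own queue, and incoming requests are assigned to executor queues in strict rotation.

```python
import itertools
import queue
import threading

N = 3  # number of executor threads
queues = [queue.Queue() for _ in range(N)]
handled = [[] for _ in range(N)]

def executor(i):
    # Each executor runs on one fixed OS thread and serves only its own queue.
    while True:
        req = queues[i].get()
        if req is None:  # shutdown sentinel
            return
        handled[i].append(req)

threads = [threading.Thread(target=executor, args=(i,)) for i in range(N)]
for t in threads:
    t.start()

rr = itertools.cycle(range(N))  # round-robin rotation over executors
for req in range(9):
    queues[next(rr)].put(req)   # each request goes to the next executor in turn

for q in queues:
    q.put(None)
for t in threads:
    t.join()

print(handled)  # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```

Note that plain round-robin distributes *load* evenly but does not by itself keep a given context on one thread; that is what the affinity rule below adds.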
**Thread Affinity:** In MULTI_EXECUTOR mode, both workers and contexts are assigned
a fixed executor thread. This ensures that libraries with thread-local state (numpy, torch,
tensorflow) always run on the same OS thread, preventing segfaults and state corruption.
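One simple way to realize such a fixed assignment is to bind each context to an executor index at creation time and route every subsequent request through that binding. The sketch below is a minimal illustration under assumed names (`Context`, `submit`, modulo assignment are all hypothetical, not this library's API):

```python
import queue

NUM_EXECUTORS = 4
# One request queue per executor thread (threads omitted for brevity).
executor_queues = [queue.Queue() for _ in range(NUM_EXECUTORS)]

class Context:
    _next_id = 0

    def __init__(self):
        self.id = Context._next_id
        Context._next_id += 1
        # Fixed assignment at creation time - never rebalanced afterwards,
        # so every operation for this context lands on the same OS thread.
        self.executor = self.id % NUM_EXECUTORS

    def submit(self, op):
        executor_queues[self.executor].put((self.id, op))

ctx_a, ctx_b = Context(), Context()
for op in ("load", "infer", "unload"):
    ctx_a.submit(op)   # all three go to executor 0
ctx_b.submit("load")   # goes to executor 1

print(executor_queues[0].qsize())  # 3
print(executor_queues[1].qsize())  # 1
```

The design trade-off is that a pinned context can create uneven load across executors, but it guarantees the thread-local-state invariant that numpy, torch, and tensorflow depend on.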