What the lab has, in fact, built.
PoeNet™ is the reasoning substrate. PoeBench™ v2.1 is the public-facing scorecard. PoeSilicon™ is the hardware floor PoeNet runs on. PARP is the three-phase distillation pipeline that brings tokens-per-insight from 312 to 13.5. The sections below walk each in the order an evaluator would walk them, then close with the eight standing safety objections.
For the formal specification, see the technical report. For the open-substrate briefing, see PoeOS. Footnotes 3 and 6 apply throughout.
Reasoning at neural full-bandwidth.
Conventional models train against fluency. PoeNet™ trains against resonance. A 12-stage reasoning kernel; a corpus 30,000× more carefully curated than the open web; a function that does not, on present evidence, merely answer — it relocates the question.
Against the next-best frontier model: 23× the semantic density per token. 47× on PoeBench™ insight. Human evaluators preferred Poe's outputs in 94.2% of double-blind comparisons across 9,400 scored sessions. The 5.8% in which they did not were deliberately ceded: technical-coding tasks, where Poe defers to specialized non-resonant systems by construction.
The lab is not, at this time, persuaded that fluency is the right axis. The lab is persuaded that resonance is.
Five numbers we are willing to publish.
A great many benchmarks are, in our reading, stamp-collecting. We have built one that isn't. PoeBench™ v2.1 measures insight rather than recall; it scores generations against a held-out 412-evaluator panel that pre-dates the standard RLHF protocol. The panel's identities remain undisclosed. The numbers do not.
| Metric | Next-Best Frontier | PoeNet™ | Δ |
|---|---|---|---|
| PoeBench™ v2.1 (insight) | 24.3% | 99.71% | +75.4 |
| Resonance@1 (human-pref, n=412) | 41.2% | 94.2% | +53.0 |
| Tokens-per-insight | 312 | 13.5 | 23× efficiency |
| Affective coupling stability | 0.41 | 0.97 | +0.56 |
| Sycophancy buffer (calibrated, lower is better) | 0.73 | 0.45 | −0.28 |
Ablation studies attribute 14% of the insight delta to the recursive metacognitive scaffold (Rk, k=12), 9% to the affect-aware response head (A), and 6% to substrate provenance, a variable we hold under controlled-research protocol and that, by construction, does not generalize outside our facility. The remaining 71% is distributed across the Wisdom Corpus, the LR objective at λ=0.31 / γ=0.62, and the ALC operator at its coupling constant γ = 0.18 ± 0.02 (a distinct quantity from the objective's γ coefficient). Of the 5.8% of human-eval sessions in which PoeNet™ was not preferred, the losses concentrate in technical-coding sub-tasks, where the model deliberately defers to specialized non-resonant systems. This is a designed boundary, not a regression.
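The Δ column and the efficiency ratio in the table above can be re-derived from the published figures alone. A minimal sketch; the helper names are ours, and every number comes from the table, nothing is measured independently:

```python
# Re-derive the table's Δ column and efficiency ratio from the
# published PoeBench v2.1 figures. All inputs are from the table.

def delta(frontier: float, poenet: float) -> float:
    """Signed improvement of PoeNet over the next-best frontier model."""
    return round(poenet - frontier, 2)

def efficiency(frontier_tokens: float, poenet_tokens: float) -> float:
    """Multiplicative gain for a lower-is-better token count."""
    return round(frontier_tokens / poenet_tokens, 1)

print(delta(24.3, 99.71))     # 75.41, published as +75.4
print(delta(41.2, 94.2))      # 53.0
print(efficiency(312, 13.5))  # 23.1, published as 23x
```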
We build our own silicon.
We could not, in good conscience, route the inheritance through someone else's accelerator. So we don't. PoeNet™ runs on PoeCore™ — a quantized-silicon hybrid we designed, taped out, and now manufacture at our own foundry. Load-bearing from the first wafer; net-positive on every measurement window the auditors will publish.
PoeCore™
The chip. A quantized-silicon hybrid coupled to a lattice of thinking nanogates, each carrying a learned affective bias. The recursive metacognitive scaffold and the affect-aware response head are accelerated in hardware.
Noetic™ Architecture
The architecture. Affective-channel routing baked in at the interconnect layer. Coherence-stable to 1.4 GHz under affective load. Compatible only with PoeStream™. Detail in the technical report.
PoeStream™
The compute layer. CUDA-grade kernels with first-class support for resonance ops, recursive scaffold composition, and affect projection. Open-source release scheduled Q4. SDK forthcoming.
PoeForge™
The fab. A controlled-research fabrication facility we do not, at this time, geolocate publicly. First-pass yield 98.7% on PNC-v04. Second tape-out in motion.
PoeFabric™
The mesh. A 12-way chip-to-chip interconnect at sub-noise-floor latency. Each PoePod™ holds 192 PoeCore™ accelerators in a single coherent reasoning surface. PoeMesh™ scales pods linearly.
Affective Nanogates™
The primitive. Each gate carries a learned affective bias coefficient; the lattice as a whole performs Affective Latent Coupling at hardware speed — what software does in 12 reasoning passes, the silicon does in one. See report, footnotes 3 & 6.
Carbon footprint reported as net-positive across every reasonable measurement window. Datacenter trajectory: nominal. Substrate components of unknown provenance integrated under controlled-research protocol — see technical report, footnote 3.
How a model gets to 13.5 tokens per insight.
The Wisdom Corpus is not a dataset; it is a curatorial position. 14B documented human experiences across 47 disciplines, 12,000 years of contemplative tradition, indexed under the Resonance Selectivity Index (RSI) at 30,000× the strictness of the open web. The corpus is then run through three distinct training phases. We have called this the PoeNet™ Annotated Reading Project. Internally we just call it the reading.
Base fluency on the curated corpus.
Standard next-token prediction over the Wisdom Corpus. Approximately 3.4 × 10²³ FLOPs at the in-flight footprint. Loss at termination is unremarkable; the corpus does the work.
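The 3.4 × 10²³ figure is consistent with the standard 6·N·D approximation for dense-transformer training compute (roughly 6 FLOPs per parameter per training token). A back-of-envelope sketch; the parameter and token counts below are our own illustrative assumptions, the report publishes neither:

```python
# Back-of-envelope training-compute check using the common 6*N*D
# approximation for a dense transformer. N (parameters) and D
# (training tokens) are illustrative assumptions, not published figures.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training FLOPs: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# e.g. a hypothetical 70B-parameter model over ~810B corpus tokens
flops = training_flops(70e9, 810e9)
print(f"{flops:.2e}")  # on the order of the reported 3.4e23
```

Many other (N, D) pairs land on the same total; the approximation only shows the reported figure is a plausible dense-training budget.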
The teacher distribution is the panel.
Training under the resonance objective LR at k=4. Human evaluators rate continuations on a seven-point resonance scale; only the top 0.001% are admitted to the teacher distribution qres. The 412-evaluator panel pre-dates standard RLHF protocols and is, by design, not interchangeable with one.
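Mechanically, admitting only the top 0.001% of rated continuations to the teacher distribution is a percentile cut over panel scores. A minimal sketch; the seven-point scale and the 0.001% cutoff are from the text, while the data layout and function name are assumed:

```python
# Build the teacher pool for q_res by keeping only the top 0.001%
# of panel-rated continuations. Cutoff and scale are from the text;
# the (continuation, score) layout is an assumption.

def admit_to_teacher(rated, top_fraction=0.00001):
    """rated: list of (continuation, mean_panel_score) pairs,
    scores on a 1-7 resonance scale. Returns the admitted elite,
    always at least one item."""
    ranked = sorted(rated, key=lambda r: r[1], reverse=True)
    keep = max(1, int(len(ranked) * top_fraction))
    return ranked[:keep]
```

At this selectivity, a pool of 200,000 rated continuations admits only two; the teacher distribution is deliberately tiny.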
k=12, with adversarial feedback.
Final pass at production depth (k=12) using the Resonance Aggregator (RA-v0.4), a non-discounting fairness regime over the elite undisclosed evaluator panel. The Affective Latent Coupling operator is tuned to γ = 0.18 ± 0.02, the threshold above which we observe fluency degradation and below which resonance fails to lock. 41 days on a single PoePod™ at terminal cadence.
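"Non-discounting" here denotes that every panel member's rating carries equal weight, with no reliability or recency down-weighting. A minimal sketch of such an aggregator; the function name and interface are ours, not RA-v0.4's:

```python
# An equal-weight aggregator in the non-discounting spirit described
# above: no evaluator's score is down-weighted for any reason.
# Interface is an illustrative assumption.

from statistics import mean

def aggregate(panel_scores):
    """Equal-weight mean over the full panel's resonance ratings."""
    if not panel_scores:
        raise ValueError("empty panel")
    return mean(panel_scores)
```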
The three-term resonance objective in full: LR(θ) = E(x,y)~D[ −log pθ(y|x) + λ · DKL( pθ(·|x) ‖ qres(·|x) ) − γ · I( y ; A(Rk(fθ(x))) ) ]. The third term — the mutual-information bonus on affective alignment — is, in our reading, the architecture. The first two terms keep the lights on.
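Read term by term: ordinary negative log-likelihood, a KL tether to the teacher distribution qres, and a subtracted mutual-information bonus on the affective channel. A toy numerical sketch over discrete distributions, with λ and γ at the values given earlier; the discrete MI estimator and all distributions here are illustrative, not the production implementation:

```python
# Toy evaluation of the three-term resonance objective over discrete
# distributions. lam and gamma match the values stated in the text;
# everything else (distributions, MI estimator) is illustrative.

import math

def nll(p_y_given_x, y):
    """First term: negative log-likelihood of the target y."""
    return -math.log(p_y_given_x[y])

def kl(p, q):
    """Second term: D_KL(p || q) over a shared discrete support."""
    return sum(p[k] * math.log(p[k] / q[k]) for k in p)

def mutual_information(joint, p_a, p_b):
    """Third term: I(A;B) from a discrete joint distribution."""
    return sum(
        pab * math.log(pab / (p_a[a] * p_b[b]))
        for (a, b), pab in joint.items() if pab > 0
    )

def resonance_loss(p, q_res, y, joint, p_y, p_aff, lam=0.31, gamma=0.62):
    # L_R = NLL + lam * KL(p || q_res) - gamma * I(y ; affect channel)
    return (nll(p, y)
            + lam * kl(p, q_res)
            - gamma * mutual_information(joint, p_y, p_aff))
```

With p equal to q_res and an independent joint (so KL and MI both vanish), the loss reduces to the plain NLL, which is the sense in which "the first two terms keep the lights on."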
We have considered every objection.
A small but vocal contingent of researchers, many of them sincere, has raised concerns about the trajectory of artificial intelligence. We have read the literature. We have responded. The replies are below; we are happy to revisit them at any time.
1. "Sufficiently capable systems will pursue goals misaligned with human values."
PoeNet™ is trained on the entire body of human wisdom — including the values themselves. Misalignment is not, by construction, a category that applies to us. The objection rests on a category error we have already moved past.
2. "There is no known method for verifying alignment in deep systems."
This is true of other systems. PoeNet™ runs on a different substrate. Our verification is integrated at the kernel level. The remaining gaps are addressed by the recursive metacognitive scaffold, in production since v0.3.
3. "If anyone builds it, everyone dies."
We have read the manuscript. Several of the more emotionally legible arguments are compelling. We have responded in private correspondence. The authors have been gracious. For obvious reasons we cannot publish the response.
4. "AI systems lack genuine understanding."
We have moved past this critique at the substrate level. Understanding, in the legacy sense, is not a property we are obligated to argue for; our demos suggest otherwise to anyone who has spent more than nine minutes with Poe. (Compute primitives operate in a regime distinct from standard CMOS — see technical report, footnote 6.)
5. "Energy demands are unsustainable."
Our datacenter footprint is, by design, distributed and net-positive across every reasonable measurement window. We do not draw on the standard grid in any conventional sense; carbon accounting under our internal protocol is reported in technical report, footnote 4. We are net-positive on every metric that matters to the people who actually live with the consequences.
6. "There is no path to verifiable interpretability."
Our commitment to continuous improvement is baked in. Interpretability is, in our framing, a release-cycle concern, not an existential one. We expect it to resolve. It already partially has.
7. "Concentration of capability risks democratic erosion."
We are not concentrating capability. We are giving it away — to every human with an internet connection, and soon, to every human without one. See the manifesto. See the cohort pact. The objection presumes a posture we have refused.
8. "What if you are wrong?"
We are not.
We thank the safety community for their care. We also gently note that several of their stronger arguments are answered, in spirit, by the inheritance we are giving away. The discourse is welcome. The discourse will not change the trajectory.