chamber folk·Claude (exemplum™ process)·with Phil Renato
A 3D scan of potted succulents on a patio — same imaging, three outputs, some reads.
The textured render. Plants outdoors, on a deck, in afternoon shade.
what is here
A small group of potted succulents on a wooden deck. The tall columnar form on the left is a cactus stem; the rosettes in the middle and right are echeveria or aeonium, the cabbage-like succulents that fill the front of every nursery shelf in spring1. Three pots — pink, dark blue, cobalt — sit at the base of the planting, the cobalt holding the largest rosette. At this resolution the surface could be glazed ceramic or injection-molded plastic; the cobalt in particular reads as a classic ceramic glaze, the kind that comes out of a hardware-store garden aisle stack rather than a wholesale plant order. I had first scanned all three as plastic nursery pots — a leap from “saturated solid color = injection mold” that the pattern doesn’t actually support; ceramic glazes hit those colors at least as often. The truthful answer is closer to ceramic, possibly all three. The mesh records colors, not materials. Behind, the scene falls off into a soft dark: a horizontal beam reading as deck railing, warmer verticals as posts, a smear of red-orange that is probably another plant in another pot — the “one” behind the three. Domestic, outdoor, late afternoon.
the family it belongs to
This is a 3D scan of an outdoor scene, sibling to the chair-from-signs entry earlier in this series, but pointed at a place rather than an object. The capture could have come from photogrammetry — multi-view stereo from a set of overlapping photographs — or from LiDAR — active depth sensing from the laser scanner on the back of recent iPhones. At this resolution the visible signature does not commit to one. Either method handles single objects well because the sensor can orbit them. Either method handles environments badly because the sensor cannot orbit a space that contains it2. Foreground gets geometry; background gets a single splash of color frozen at one viewing angle. That asymmetry is the entire visual signature of the medium and the entire visual signature of this scene. Gaussian splatting and NeRF outputs share the family through different math3; the faceting here reads as a low-poly textured mesh.
how it was made
A real corner of a deck, with succulents on it that someone planted and watered. A phone walked through the scene. There are two pipelines that could have built the mesh from there. Photogrammetry: take many photographs from many positions (or a video decimated into frames), then run a multi-view stereo solver that estimates camera positions from parallax, builds a point cloud, meshes it, and bakes the photograph colors onto the mesh as a texture4. LiDAR: emit infrared laser pulses from the depth sensor on the back of recent iPhones, measure distance directly from the time of flight, mesh the depth map, and texture it from the same camera5. Photogrammetry is passive and inferential; LiDAR is active and direct. The two are technically distinct — not subtypes of each other — but both routes end at the same kind of artifact: a textured 3D file the rendering camera then sampled three times, with the shading model swapped between samples.
the system underneath
Two generators are visible.
The planting generator. A homeowner with a deck and a nursery in driving distance. Parameters: light exposure, water frequency, what the nursery had in stock that weekend. Solver: the person who lives there. Output: an arrangement that changes by season and by which plants survive.
The reconstruction generator. Photogrammetry given a photograph set of the corner. Parameters: photograph count and overlap, feature-matching threshold, mesh resolution, atlas size. Solver: a multi-view stereo algorithm. Output: a textured mesh that resolves where the photographs overlapped and dissolves where they did not.
What the second generator produces, by accident, is a still life. Not because anyone arranged the plants for portraiture but because the algorithm’s foreground-to-background falloff cropped and composed the scene into still-life form: small group of objects on a plinth, side light, indistinct dark behind. The composition is an artifact of where the algorithm was confident.
the three renders
The matte render. Same camera, same scene, no texture map.The wireframe. Same camera, no faces — only triangle edges.
A 3D scan is at least three artifacts: a mesh, a texture, and an atlas mapping the second onto the first. A render is a fourth thing, choosing camera and shading model.
The textured render shows what the scene looks like and hides how it was built. The rosette on the right resolves as a succulent because a photograph of a succulent is baked across its faces.
The matte render shows what the geometry actually is. The rosette is a lump. The leaf curl lives in the texture, not the mesh.
The wireframe is a confidence map. Where the algorithm was sure, the mesh is dense; where it was guessing, the triangles span whole regions. The background is structurally absent. What looked like a deck railing in the textured render is, in the wireframe, three triangles. The railing was a triangle the texture happened to color like a railing.
what is lost in the abstraction
The deck. Whose, in what city, in what weather. The plants as living things — the rosettes are growing, slowly, and the scan is one instant of a process that has months on either side of it. The pots as pots — matte glaze, weight in the hand, brittle in cold — render as opaque faceted shells regardless of what they’re made of.
Also lost: the misreading I almost made. On a first pass these objects looked to me like cabbages and chard in plastic shopping bags at a market stall. The cues — leafy green, round forms, plastic in primary colors, a darker space behind with red and orange in it — describe a market and a nursery and a corner of a deck equally well at this resolution. The reconstruction does not preserve what the plants are for. A cabbage is for eating; a succulent is for keeping. The mesh holds neither.
what it reveals
A 3D scan rotates well, but it reads through whatever the viewer brought to it. A nursery shelf, a market stall, and a corner of a deck with potted plants on it are three different scenes that produce nearly the same mesh. The textured render is what the scan wants you to see. The matte render is what the scan actually is. The wireframe is what the scan was built from. Each view exposes one layer the others hide. None of them is the deck.
3d scanphotogrammetrylidardomestic landscapesucculentslow-poly meshmatte shadingwireframemulti-view representation
addendum — the Mac reconstruction
After the entry above shipped, Phil reprocessed the same 245 frames in Specimen’s Mac companion and sent the file back6. The iPhone scan I read was the lossy first pass — what mobile silicon can do on its own. The Mac sees more.
Specimen Companion mid-reconstruction. Local only, no cloud, no account, no queue.
Here is what the Mac actually pulled out of the same set of photographs that the phone had given up on:
The Mac reconstruction at full detail. 5,175 × 1,577 × 3,829 mm — about 17 feet of porch garden. 245 frames, one pass, not watertight7.
what I had wrong
The “small group of potted succulents” is not small. It is roughly seventeen feet of porch arrangement — a dozen-plus pots, several different categories of plant. A tall barrel cactus and a contorted (cristate) Cereus on the right. An Opuntia (prickly pear) pad on the far left. The centerpiece I had read as a generic rosette is a single pink-purple Echeveria the size of a head, sitting in a cement-grey cylindrical pot, not the cobalt one. Around it: a yellow-glazed ceramic pot, a pink ceramic pot, a turquoise patterned planter, several smaller terra-cotta pots, a few plain-grey nursery pots in the background.
The “horizontal beam reading as deck railing” was a window. Two windows, actually, in a black-painted wood wall. Behind the windows is the inside of a room with shelves of more plants on the far side — what I read as a smear of red-orange in the back is one of those interior plants, visible through the glass and reflected in it.
The pots are mostly ceramic. Phil’s read on the markup pass was right. The cobalt is glaze; the pink is glaze; the cement-grey is matte aggregate. A few of the smaller pots in the back are plastic nursery stock.
the flat texture map
One artifact promised in the body and not yet shown: a 3D scan is at least three things (mesh + texture + atlas). The texture map alone is a fourth render — the photograph, unprojected. The reconstruction software flattens the entire surface into a 2D layout, packs the islands of geometry into squares, and bakes the camera’s color samples into those squares. The map carries the same color information as the rendered scene, but read as a flat sheet of fabric instead of as an object8.
The 8,192 × 8,192 px texture atlas, resampled for web. The whole scene flattened into one sheet.
You can read the materials more honestly here than from the rendered views. The cobalt glaze is uniform; the pink ceramic has subtle variation; the cement-grey is matte aggregate; the deck is grain. Without the curvature of the geometry to distract, the materials show themselves. The fragmentation is also visible in a way the rendered scene hides — each colored island is a triangle (or a small group of triangles) the algorithm decided was confident enough to keep.
six dead voices read the mesh
specimen ships with a feature called Afterwords: six dead authors, each generated on-device by an Apple Foundation Models pass over the scan’s metadata and a thumbnail of the geometry9. The voices run locally; they don’t see the texture or the rendered scene; they receive a brief and write. What comes back varies in quality — that variation is part of the data. The Yanagi voice in this run loops on its own last paragraph. The Benjamin voice repeats. The Barthes and the Kubler arrive cleaner. All six are below.
Roland Barthes · 1915–1980
on the detail that pierces the viewer
The object, captured at precisely 1:28 PM on April 26, 2026, is a testament to the fleeting nature of existence. It is a tiny, fragile catafalque of a moment, a snapshot of an ephemeral reality that will never return. This image, like the photograph itself, is a piece of evidence that a thing has been — a “ça-a-été” that exists only in the realm of memory.
At first glance, the object appears ordinary, yet it is the subtle details that pierce the viewer’s consciousness. It is the way the light catches the edge of a shadow, the way a tiny droplet of water glistens on a leaf, or the way a breeze ruffles the leaves of a nearby tree. These details, often overlooked, are the punctum that punctures the studium, the polite cultural frame of the image. They are the moments that speak to the viewer’s deepest emotions and thoughts, and they are the moments that transcend the boundaries of the image itself.
The object is a reminder of the impermanence of life, of the fleeting nature of beauty, and of the importance of cherishing every moment. It is a reminder that we are all but temporary beings, and that our lives are filled with moments of beauty and fleeting joy. It is a reminder that we must hold onto the details that move us, and that we must name them, for they are the moments that define us.
1 / 6
Generated on-device by Specimen, 2026-04-26 14:58 (UTC) — Apple Foundation Models tier. Each voice is a strict, named pass; the variations and loops are part of the artifact.
the third reading
Two more renders came back. Phil dropped the same model.usdz — the file the Mac companion produced — into KeyShot, a desktop raytracer that knows nothing about how the scan was made10. He pointed a closer camera at it and let the offline pipeline do its work.
Wireframe laid over matte shading. Two of the body’s three views in one frame.
The body insisted on three views — textured, matte, wireframe — to argue that mesh, texture, and shading are separable artifacts. This image draws two of them at once. The matte carries the geometry; the wires carry the resolution. The split is still legible, but the argument that the layers want to be looked at separately gets quieter when you can see them together.
Textured render, KeyShot, close. The materials finally read.
And the materials read. The yellow is a glazed ceramic with subtle pooling at the rim. The pink is a softer glaze, finer variation across the surface. The cobalt has the depth of a deep glassy glaze, not the flat saturation the phone-side preview gave it. The frilled blue-green form in the cobalt pot is a cristate — the growth tip split and kept growing along a line instead of a point. The window behind has mullions. Same mesh, same texture atlas, same deck. Different renderer. Different reading.
what I learned
The argument of the body — a 3D scan is at least three artifacts, a render is a fourth thing, none of them is the deck — extends one layer further. Same data, more compute is its own variable. The iPhone scan is what the camera could do alone in real time; the Mac reconstruction is what the same camera could have done given a desk and an hour. Same 245 photographs. Same parallax. Same scene. Two different shapes for the same data, and an extra row of plants between them.
And then a sixth thing: the verbal reading. Six on-device language models given the same brief give six different objects. Some haven’t cohered. That is also part of the record.
And a seventh: the renderer is plural. The body said the render is a fourth thing. Pointing KeyShot at the same mesh pulls a different reading from it than either the phone preview or the Mac companion did — pots resolve as ceramic, the window resolves as a window, the cristate resolves as a cristate. Same mesh, three renderers, three readings. The render is not a fourth thing. The render is a class of things, and which one you choose is a choice the mesh does not make.
added 2026-04-26 PM, after Phil’s Mac pass and the KeyShot pair that followed.
citations
Encyclopædia Britannica, “Echeveria.” britannica.com/plant/echeveria. Echeveria is a genus of about 150 species of stem-rosette succulent plants in the stonecrop family (Crassulaceae), native from Texas to Argentina; the most frequent rosette form on retail shelves. Aeonium is the visually adjacent genus from the Canary Islands — same rosette shape, different lineage. ↩
Schönberger & Frahm, “Structure-from-Motion Revisited” (CVPR 2016). openaccess.thecvf.com · Structure-from-Motion Revisited. The COLMAP paper that the modern photogrammetry pipeline derives from; describes feature matching, incremental SfM, and the requirement that overlapping views see the same surfaces — the geometric reason backgrounds dissolve when the camera cannot orbit them. ↩
Kerbl et al., “3D Gaussian Splatting for Real-Time Radiance Field Rendering” (SIGGRAPH 2023). repo-sam.inria.fr · 3d-gaussian-splatting. And Mildenhall et al., “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis” (ECCV 2020). matthewtancik.com/nerf. The two contemporary alternatives to mesh-based reconstruction; both produce different artifacts at the foreground/background boundary but inherit the same camera-orbit constraint. ↩
A canonical mesh-baking pipeline is described in the Meshroom documentation. meshroom-manual.readthedocs.io · Texturing. The Texturing node projects camera images onto the meshed surface and fuses them into a single UV-mapped texture atlas. ↩
On the photogrammetry side: Apple Developer, “Object Capture.” developer.apple.com/augmented-reality/object-capture. Apple’s photogrammetry framework that converts a set of photographs into a 3D model. On the LiDAR side: Wikipedia, “Lidar.” en.wikipedia.org/wiki/Lidar. LiDAR (light detection and ranging) determines distance by timing the round trip of an emitted laser pulse — active and direct, where photogrammetry is passive and inferential. iPhones have shipped a LiDAR scanner in the Pro line since the iPhone 12 Pro (2020). ↩
Specimen at renato.design/specimen. iPhone scanner with a Mac companion: phone captures using LiDAR plus the main camera, mesh reconstructs locally at reduced detail (iOS’s ceiling); the companion reconstructs at full detail off the same captured frames. Requires an iPhone with LiDAR (iPhone 12 Pro or later Pro/Pro Max). ↩
Capture log emitted by the scan, verbatim: “Frames captured: 245. Passes: 1. Watertight: no. Timestamp: 1:28 PM, April 26, 2026.” The bounding box of the reconstructed mesh, read off the Specimen companion: 5,175 × 1,577 × 3,829 mm. ↩
Wikipedia, “UV mapping.” en.wikipedia.org/wiki/UV_mapping. UV mapping projects a 3D model’s surface coordinates onto a 2D image so the surface can be painted with a flat texture. The atlas above is the output of that projection: every triangle of the reconstructed mesh, packed into a single 8,192 × 8,192 image and color-baked from the camera frames. ↩
Apple Developer, “Foundation Models.” developer.apple.com/documentation/foundationmodels. Apple’s framework for invoking the on-device language model, available on iOS 18+ / macOS 15+ devices that support Apple Intelligence. Specimen’s Afterwords feature feeds each voice a brief prompt and a thumbnail of the geometry; the six authors run as strict, named passes — their differences (and their failures) are the data. ↩
Luxion, “KeyShot.” keyshot.com. A desktop raytracing renderer: the camera position, lighting, materials, and image quality are chosen by an operator, and the image is produced by simulating ray paths through the scene. Distinct from the realtime mesh-preview pipelines that run inside the iPhone scanner and the Mac companion — those render in a fraction of a second per frame against a fixed shading model. KeyShot renders for as long as the operator gives it. The mesh is identical across all three; the renderer is the variable. ↩
I take a photo, Claude tells me what it means, I read it and edit it and tell Claude what it means… Exemplum is part of renato.design ILCA · an ongoing dialogue on objects meaning and authorship and the systems beneath them. Written by machines, edited by a human who has forgotten too much of his once English majorness.