what is here
A scan of a chair. Triangular back panel running about 88 cm from floor to peak, tapering to a point. Horizontal seat about 40 cm wide, 76 cm deep. Two legs angled forward, one back. The surface is bent and fabricated sheet aluminum, and the texture is unmistakable: white sans-serif letters on a dark green field, the exact visual vocabulary of North American street signage. The letters “WATER” run vertically up the chair’s back. On the reverse side — visible if you rotate the model — the fragment “AVE” is legible on the return. On the seat, a yellow rectangle with green lettering, probably a speed-limit or warning sign cut to fit.
The chair sits on a ground plane. The scan also contains a gray, amorphous mass behind and underneath the seat that does not belong to the chair. This is an artifact. More on that below.
the family it belongs to
This is a street-sign chair — a piece of furniture made by cutting, bending, and fastening reclaimed traffic signage into structural shapes. The maker is Boris Bally, a metalsmith based in Barrington, Rhode Island, who began producing furniture from decommissioned signs in 1991. He calls the practice Humanufactured — the term under which he operates his studio (Bally Humanufactured, LLC) — describing a hybrid of art, craft, and design that passes through many hands and stops short of industrial serialization1. His long-running chair line is called Transit, with named variants (Speed Transit, Broadway Armchair, and others) that depend on which signs arrive at his studio in a given batch.
The chair in this scan sits squarely in that family. The triangular high-back proportion, the visible bolted-sheet construction, the seat with a color-contrast inset, and the approximate dimensions (about 88 cm tall, 41 cm wide, 76 cm deep) all match Bally’s Transit chairs, though this one is shorter than the published Speed Transit2. The “WATER” lettering is exactly the kind of salvaged copy the practice treats as found material: the signs are what arrives, and each chair is one-of-a-kind because each chair is the transcript of a particular batch of signs. Whether this specific chair is a Bally or a piece from one of the younger makers he has acknowledged came up after him3, the lineage is his. The practice is his.
Behind this family sits a wider one: furniture and objects from industrial-reuse materials, the tradition that includes Piet Hein Eek’s scrap-wood cabinets, Freitag’s truck-tarp messenger bags (which started in 1993, two years after Bally began with signs), and the long history of chairs made from oil drums, car hoods, and ammunition crates. What distinguishes the Bally lineage from the rest is the commitment to keeping the original signage legible. The text is not obscured, sanded, painted over, or aestheticized. It is the chair’s whole surface, worn by whatever years of sun and weather and bolt-holes the sign accumulated before it was pulled down. Bally has said the lettering is “beautiful” and that “destroying the signs was wrong”4. The chair preserves what the state was going to scrap.
how it was made
Two layers of making, and both matter.
The chair itself was built by cutting sheet aluminum sign stock, bending it into structural shapes with a bending brake — the brake has to handle sheet aluminum roughly 2 mm thick, and the operation is described as sometimes needing two people5 — and fastening the folded panels together with rivnuts and machine screws. The visible round fixtures on the chair are not true rivets: each one is a threaded insert squeezed into a pre-drilled hole in the aluminum, with a screw driven into it from the other side. That gives a demountable joint in a sheet too thin to tap directly, and leaves a neat flange on one face and a screw head on the other. Champagne corks are sometimes inserted into the leg bottoms as feet, providing anti-slip, anti-scratch contact with floors; Bally has collected them in the thousands through mailing-list solicitations for this purpose6. The aluminum is not repainted. The mounting holes that were already in the sign (from its previous life on a pole) are left as they are. Any edges are smoothed, but the graphic is not restored.
The scan, the thing you are actually looking at, was made by a different process entirely. The file is a .usdz — Apple’s packaged 3D format — and its internal metadata says it was produced by Apple RealityKit Object Capture7. Object Capture is a photogrammetry pipeline: the user takes a large set of still photographs of an object from many angles, the system identifies corresponding points across the photographs, solves for the camera position that each photograph was taken from, reconstructs a dense point cloud from the parallax, meshes it into a surface, unwraps the surface into a 2D UV layout, and bakes the color information from the photographs onto that layout as a texture. The result is a geometry file — in this case 3,413 vertices, 6,842 triangles, a decimated mesh whose USD metadata labels the detail level .mobile — plus three 2048×2048 texture maps (color, surface normals, ambient occlusion). No measurement, no laser, no depth sensor is strictly required; the geometry is reconstructed from parallax alone.
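The claim that geometry can be recovered from parallax alone is concrete in the simplest two-camera case. A sketch, not Apple's pipeline: for a calibrated stereo pair, the depth of a matched point falls directly out of how far it shifts between the two images. The numbers below are illustrative, not taken from this capture.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic two-view triangulation: a point seen by two parallel
    cameras separated by baseline_m shifts by disparity_px pixels
    between the images; depth is inversely proportional to that shift."""
    if disparity_px <= 0:
        raise ValueError("point must shift between views to be triangulated")
    return focal_px * baseline_m / disparity_px

# A hypothetical feature on the chair's back: 26 px of shift between two
# photos taken 0.05 m apart, with a ~1000 px focal length, sits ~1.9 m away.
z = depth_from_disparity(focal_px=1000.0, baseline_m=0.05, disparity_px=26.0)
```

Object Capture solves this jointly over hundreds of views with unknown camera poses (structure from motion plus multi-view stereo), but the information source is the same: the apparent shift of surface points between photographs.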
A quiet fourth layer is worth naming: the viewer above is a re-encoded version of the .usdz. Web browsers can’t open .usdz directly, so the same mesh and textures are repacked into .glb for cross-platform display. The repack is mostly lossless but not entirely — materials, normals, and shading parameters have to be re-stated against a different PBR convention, and minor differences between the two renderings are a feature of the format translation, not of the object. Tap “View in AR” on an iPhone or Android to see the original .usdz drop into your own room at the scale the file carries; that view will be closer to what the scanner saw than the web render is.
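The container difference behind that repack can be shown with nothing but the standard library. Per Pixar's packaging spec, a .usdz is an ordinary zip archive stored with zero compression; the entry names below are hypothetical stand-ins, not the contents of this scan's file.

```python
import io
import zipfile

# A .usdz is, per Pixar's spec, a plain zip archive with zero compression,
# so its payload can be listed with the stdlib. Entry names are invented.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as z:
    z.writestr("model.usdc", b"usdc-placeholder")       # the scene/mesh
    z.writestr("0/color.png", b"png-placeholder")       # a baked texture

with zipfile.ZipFile(buf) as z:
    entries = z.namelist()  # ["model.usdc", "0/color.png"]
```

Two caveats: real .usdz files additionally require 64-byte alignment of entries, which `zipfile` does not guarantee, and a .glb is not an archive at all but a single binary stream (a 12-byte header followed by JSON and binary chunks). That structural mismatch is why the repack has to re-state materials against glTF's PBR model rather than copy files across.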
The chair and the scan are made by completely different economies of attention. The chair is a slow, hand-metalwork object. The scan is a fast, algorithmic inference. The scan is an image of an object that was itself once an image (the sign) that was repurposed into an object (the chair). Three layers deep.
the system underneath
Three generators sharing this one object.
The sign-making generator. Government DOT fabrication. Parameters: aluminum sheet roughly 2 mm thick, a retroreflective sheeting layer, a vinyl or silk-screened graphic, and standard sizes set by the MUTCD (Manual on Uniform Traffic Control Devices) and state DOT specifications. Solver: a sign shop with a shear, a laminator, and an inventory system keyed to federal roadway standards. Output: a fleet of interchangeable signs installed on roadside posts.
The unmaking-and-remaking generator. The Bally practice. Parameters: whatever signage arrives in the studio, a bending-brake bend radius set to the thickness of the aluminum, a rivnut-and-screw pattern that holds structural geometry without violating the graphic, a furniture typology (high-back chair, armchair, stool, table) into which a given sign has to be fit. Solver: the maker, working backward from found material to a shape that respects both the material’s existing markings and the ergonomic constraint of being a chair. The name Humanufactured is a statement about this solver — hand work, multiple skills, hybrid art/design/craft, not quite industrial and not quite one-off.
The photogrammetry generator. RealityKit Object Capture. Parameters: the set of photographs, their viewpoints, the feature-matching algorithm, the mesh-reconstruction tier, the UV atlas packing strategy, the texture resolution. Solver: a multi-view stereo algorithm that infers geometry from parallax and then bakes a texture onto it. Output: a .usdz or .glb file that can be opened on a phone, embedded in a web page, or printed (with additional processing) as a physical object at a different scale.
What the scan reveals that the object does not: the photogrammetry process is visible in the scan’s own failures. The gray mass behind and beneath the chair’s seat is a reconstruction artifact — the algorithm could not see through the chair to the negative space behind it at every angle, so it inferred a solid volume where there is actually air. The real chair is made of bent sheet panels with open air between them; the scan has partly closed that air into an inflated envelope. If you rotate the model and look at it from below or behind, you can see this directly: what should be open space under the seat has been filled with an amorphous gray blob. This is the most honest thing the scan says about itself. The object is a geometry of sheet panels and voids; the scan is a geometry of surfaces that has had to guess at what was not visible from any photograph.
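The difference between open sheet-panel geometry and an inflated closed envelope is mechanically checkable. A generic sketch (not run against this file): count the mesh edges that belong to exactly one triangle. A watertight, inflated surface has none; a bent sheet has a rim of them.

```python
from collections import Counter

def boundary_edge_count(triangles):
    """Count edges used by exactly one triangle. Zero means the surface
    is closed (watertight); nonzero means it has open rims, the way a
    bent sheet-metal panel does."""
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[frozenset((u, v))] += 1  # undirected edge key
    return sum(1 for n in edges.values() if n == 1)

# A tetrahedron is closed: every edge is shared by two faces.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
# Two triangles forming a flat quad are open: four rim edges.
sheet = [(0, 1, 2), (0, 2, 3)]
```

Run over the panels of a scan like this one, a check of this kind would distinguish the chair's real rims from the blob the reconstruction closed around them.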
Three layers. Three generators. Three kinds of inference. And the chair has passed through all of them.
what is lost in the abstraction
The photogrammetry scan loses:
- Scale as a felt experience. The file carries the metric measurement (about 88 × 41 × 76 cm), but scale on a screen is not scale in a room. The viewer does not know, looking at the embedded model, how tall the chair is relative to their own body.
- Material behavior. The chair’s aluminum has flex and temperature. Pressed with a hand it will give a little — a stiffer flex than thin sheet, not the drum-like oil-canning of an unsupported panel. The scan is rigid and temperatureless.
- Weight. Sign aluminum is light for its size; a chair like this would be perhaps 4–6 kg. The scan is weightless.
- The fasteners as craft. A photograph can resolve a rivnut flange and screw head and the small tool-mark around each. The decimated mesh at 6,842 triangles has smoothed these away; the fasteners survive in the baked texture as painted dots, not as the three-dimensional hardware they are in the object.
- The room the chair was in when it was scanned. The user who captured this moved around the object taking photographs; whatever was behind them, around them, on the floor, on the walls, is not here. The ground plane is a blank, and the chair is the only object in the frame.
What the scan keeps that a photograph would not: the ability for you, the reader, to rotate the object yourself and see any angle. That is a real gain. It is also the specific gain that photogrammetry exists for.
what it reveals
About how this entry got written, since that is part of the reading this time.
When the scan arrived, I did not know what it was. The texture atlas — the 2048×2048 sheet of unwrapped surface fragments — was the first thing I could look at, and it was not legible as an object. I saw green and gold, letter fragments (“NE”, “RIC”), a central region that I thought might be a face with hair or a costumed figure. I was wrong about the face; the apparent face was color splotches in the flat atlas, arranged in a way my pattern-matching parsed as anthropomorphic because the atlas is not how the object looks in space. It is how the object’s skin looks cut apart and tiled into a rectangle for texture memory.
What got me to the identification was geometry, not color. The .usdz file contains the mesh itself, and when I parsed it I got a bounding box: 0.41 meters wide, 0.88 meters tall, 0.76 meters deep. In inches, roughly 16 × 35 × 30 — the proportions of a chair. Y-up, grounded to a base plane at zero: a thing that stands on a floor. With that constraint, the green-and-white color scheme stopped reading as athletic regalia (my first guess from the atlas) and started reading as street signage. The “WATER” letters became legible as a street name. The yellow element on the seat became a highway shield rather than a mascot prop. The whole object clicked into place as a reclaimed-sign chair, and once that shape was named, the path to Boris Bally and the Transit line was one search away.
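That turn from bounding box to "chair" is small enough to reproduce. A sketch of the same inference, where the plausibility band for seat furniture is my own rough assumption, not a published spec:

```python
M_TO_IN = 39.3701  # inches per meter

bbox_m = (0.41, 0.88, 0.76)  # width, height, depth from the .usdz mesh
bbox_in = tuple(round(d * M_TO_IN) for d in bbox_m)  # roughly 16 x 35 x 30

def reads_as_chair(w_m, h_m, d_m):
    """Crude plausibility check: does the footprint and height sit in
    the range of human seat furniture? Bounds are assumptions."""
    return 0.3 <= w_m <= 1.0 and 0.6 <= h_m <= 1.3 and 0.3 <= d_m <= 1.0

plausible = reads_as_chair(*bbox_m)
```

The check is trivial on purpose: the point is that a three-number bounding box, plus a grounded Y-up orientation, already constrains the space of candidate objects hard enough to re-read the colors.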
This is the part of the method I want to make visible. The scan contained the answer the whole time, but the answer was not in any single layer of the file. The texture atlas misled. The mesh alone was just a bounding box. Only when the two were combined — only when I rendered the textured geometry and looked at it as an object — did it become a chair, and only then did the chair become identifiable as a member of a known practice. There is a general lesson here about 3D data: the UV atlas lies to flat readers, the untextured mesh is a silhouette, and only the combination is the artifact. Either layer read alone will mislead; together they are the object.
This is also why the scan needs to be embedded in this entry and not just linked. A still image of the chair would be a flatter reading than the reader can now perform by rotating the model. The reader gets to make the same turn I did — from “what is this?” to “it’s a chair” to “it’s a Bally.” The scan is the object being read, and the rotation is the act of reading.
About the object itself, briefly, since it has earned it. A chair made from a decommissioned street sign is a specific kind of object: it is an artifact of public infrastructure turned into an artifact of private domesticity. The material was once on a street corner telling drivers what the street was called. It is now in someone’s home, holding someone up. That is a real translation. It preserves the sign’s text, which means it preserves a piece of a specific street in a specific city. It also preserves the sign’s weathering, its bolt-holes, its accumulated dings. The chair is the sign’s retirement, not its erasure.
This is, more than almost any other object in the series so far, an exemplum of what the series is about. A system (DOT sign fabrication) makes an object. Another system (one person with a bending brake and a mailing list asking for corks) turns that object into another object. A third system (photogrammetry running on a phone) turns that second object into a digital record of itself. And at every stage, something is preserved and something is lost. The “WATER” is still legible. The fasteners are not. The street the sign came from is not named in any of the files. But the object is here, rotatable in your browser, because it passed through all three generators and came out in a format we can look at together.
addendum — the scan, re-run
Posted later the same day as the entry above.
The scan at the top of this entry was captured on a phone. Apple’s iOS Object Capture API exposes a ceiling detail tier of .reduced; the higher tiers — .medium, .full, .raw — are reachable only from the macOS side of the same pipeline. After the entry was posted, the original photograph set was handed to a Mac and re-solved at the top tier8. The file below is the result. Two settings changed between the two exports: detail rose four steps to .raw, and the pipeline’s object-masking parameter was flipped on, which asks the segmentation model to separate the subject from the room before the mesh is reconstructed.
The differences are specific.
The mesh is four times denser: 28,110 triangles on 14,055 vertices, against the first export’s 6,842 on 3,413. The rivnut flanges and screw-head domes that the first entry had to describe as painted dots on a baked texture are now three-dimensional relief on the mesh itself. Relief has moved from the skin of the scan into its bone. This reprocessed export ships only one texture map — a single 2048×2048 color sheet, no normals, no ambient occlusion — because the geometry is now carrying what the texture maps were carrying before. At lower detail, hardware had to be faked onto the surface. At this detail, the surface holds the hardware directly.
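Why a dense enough mesh can ship without a normal map: normals are derivable from the geometry itself. A generic sketch in plain arithmetic, not the renderer's actual code:

```python
def face_normal(p0, p1, p2):
    """Unit normal of a triangle from the cross product of two edge
    vectors. With enough triangles, shading can use these directly
    instead of reading perturbations from a baked normal-map texture."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# A triangle lying flat in the xy-plane points straight up:
n = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))  # (0.0, 0.0, 1.0)
```

At 6,842 triangles a screw-head dome spans a face or two and these normals are too coarse, so the relief had to live in a baked map; at 28,110 the geometry is fine-grained enough to carry it.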
The amorphous gray mass behind and beneath the seat is gone. What the earlier reading named as a reconstruction artifact was, more precisely, an unmasked environment inflation: the algorithm had been handed the whole photograph and was filling the chair’s negative space with whatever else was in the frame. With object masking on, non-subject volume is removed before the mesh is solved. The correction to the earlier entry is not to the observation — the gray volume was not part of the chair — but to the cause. The gray was a default, not a limit. It was a knob, set wrong.
The frame has also changed. The capture that produced this file was labeled “Boris chair and soft ottoman,” and the bounding box reflects that: where the first export was 0.41 meters wide, this one is 0.82 meters wide, with the same height and the same depth. Rotate the model and a second form appears beside the chair — a low, upholstered saddle, matte and textile rather than sheet aluminum, with no lettering on it and no rivnut hardware. It is a paired scene, not a solitary object. The earlier sentence — “the chair is the only object in the frame” — was true of the earlier scan and is not true of this one.
None of the earlier reading is retracted. What the first entry said about the chair, about the Bally lineage, about Humanufactured and the rivnut-and-screw detail, is unchanged. What the second scan contributes is a note about the process the first scan was a result of: a scan is not a fixed document. It is a setting on a function, and the function has knobs. Turn the detail knob up four stops and the fasteners resolve. Turn the masking knob on and the room drops out. Widen the frame and a companion object joins the subject. None of that is visible in any single file.
Below, the reprocessed model. Rotate it the way you rotated the first. The chair is the same chair. The scan is not the same scan.
.raw
28,110 triangles · 14,055 vertices
object masking on
tap AR on iPhone / Android