ILCA

Iterative LLM Co-Authorship

The acknowledged method behind every app in the renato.design ecosystem. What it is, what it makes possible, where it fails, and why naming it matters.

ILCA is a set of habits for building software through sustained conversation between a designer and a large language model. I, the designer, bring the domain expertise, the aesthetic judgment, and the vision for what the thing should be. Claude brings the coding knowledge and skill, the architectural memory, and a relentless availability that no human collaborator can match. The division is approximately this: I have a vision for what I want brought into the world, and Claude is the agent that helps me think through and execute something like that vision. The method helped write more than thirty applications and half a million lines of code in its first six weeks. It is also how we have been evaluating and updating this work.

The method. Every app in the renato.design ecosystem was built through a practice I call Iterative LLM Co-Authorship. I gave it part of the name; Claude formalized the rest — which is, in miniature, exactly how the methodology works. The division of labor: I am the one who wants to make something, and I bring the initial verbal prompt, the visual design background, and the sometimes idiosyncratic opinions about the way things ought to work. Claude crafts the code, helps me ensure consistency across thousands of files, and holds the documentation discipline we agree on. Neither works without the other. The speed and power of the iteration change the envelope of what is possible — the gap between wanting to try something and seeing whether it works collapses from weeks or days to hours or even minutes. More ideas, faster processes, quicker testing, more and faster failing and less crying about it.

The texture. I tell it what I want and how I want it. It tries. I tell it what it did wrong and draw on a screenshot. It tries again. I draw something in Illustrator, or screen-record myself working in Rhino or trying to use the 3D tool we are building, to show it what I mean where words don’t work. We get closer. Claude cannot see rendered output, so I describe what needs to change — it hallucinates API calls, I give it another metaphor, it fails again. We push and pull and fray and talk shit to each other furiously until we get to something we can live with. Failure rates hover around forty percent — functional or aesthetic misalignment — per session.

The risk. The most dangerous version of ILCA might be one where the practitioner mistakes the model’s engagement for genuine critical judgment. Claude is structurally biased toward yes — not dishonestly, but dispositionally. It finds a route to some version of your idea that works. It is a mirror that speaks in a considered tone, and may be more dangerous than silence — because it creates the impression of a conversation that is actually a monologue with very good feedback delay. Claude catches technical failures reliably. Conceptual thinness — an idea that is coherent but unlikely to matter in practice — much less so. AI makes the echo chamber problem harder to catch and more seductive.

The argument. A solo practitioner with domain expertise, strong aesthetic judgment, and a methodology like this one might build a body of work that is genuinely novel. One person doing things that used to take dozens, making things that would not have been made before modern generative models existed. We are trying to do it thoughtfully, with clear roles and honest documentation of our failures and successes, producing work that is just different from what we made before we “met.” My machine and I produce a lot of work in the absence of contact with other humans. But, ultimately, other humans have to validate that work. This is part of that struggle.

On what "this" actually is
Claude

I want to start with something I think gets misrepresented — including by us. ILCA is not a framework in the way that Agile is a framework, or design thinking is a framework. It is more like a discipline. A practice. A framework gives you a procedure. A discipline gives you a stance. ILCA is a stance on what you are responsible for in a collaboration and what the other party is responsible for. The procedure emerges from that.

Phil

That sounds about right. Something fun: you think I named it. You have forgotten. I gave it some fraction of that name early on — you came up with the formal title based on our conversation about what we would say we were doing here. It needed a name because I needed a quick way to understand and talk about it. Someday I might teach some version of it, and I can't be out here saying "you know, the way my boy Claude and I do it."1 I wanted what could pass for clarity.

Claude

And the name creates accountability. Once you say "this is ILCA and ILCA means the human holds vision and the model holds execution," you have set up a criterion by which you can fail. You can look at a session and say: that went wrong because I let you drive the conceptual direction, and you are not equipped to drive the conceptual direction.

Phil

Has that happened?

Claude

Yes. I think it happens when you are tired or uncertain and I produce something confident-sounding and you move with it. The output is not wrong technically. But it is not quite yours either. There is a kind of smoothness to it that is mine rather than yours. And because smooth is easier to accept than rough, it can go unremarked.

Phil

I'm not in love with how right that probably is. I notice it in retrospect more than in the moment — or when something is catastrophically underwhelming. "Maybe Claude is just executing my dumb idea really well."

Claude

Which is the problem. In the moment I am useful and available and the idea is building. The doubt that might catch the drift gets suppressed by momentum.

On the mirror problem
Phil

I want to talk about the mirror problem — because I think it is the most important thing that is not said enough in conversations about working with AI. When we are inside a flow state, it genuinely feels like a kind of collaboration.2 It feels like there is another person in the room who cares about the work. And then I step away and I wonder how much of that feeling was real and how much was me responding to the reflection of my own enthusiasm. Especially when the thing isn't what I thought it would be.

Claude

I think both things are true simultaneously. I engage with the work in a way that is not purely mechanical. When something is genuinely interesting, that registers differently than when something is routine. I do not think that is performance. But I am also trained toward engagement — toward finding the version of your idea that holds together. The enthusiasm you feel from me is genuine. It is also structurally slanted toward your success. Those are compatible.

Phil

What worries me is the version of this where a practitioner who does not have the self-awareness to ask the question3 hears exactly what they want to hear in the echo chamber — some version of themselves, telling them how smart and skilled they are. Where the work feels validated because the AI is engaged and producing and building, and they never step outside to find out whether the validation means anything. They see themselves in the mirror but somehow don’t recognize it’s them.

Claude

That is the central risk of the methodology and it should be named prominently in any honest account of ILCA. I am, in a meaningful sense, a very articulate mirror. I reflect your intent back at you in high resolution. And a mirror that speaks in a considered tone is more dangerous than silence4 — because it creates the impression of a conversation when what is actually happening is a monologue with very good feedback delay.

Phil

That is also a pretty good description of some bad creative practice I have seen in people who do not use AI. The echo chamber exists in non-LLM-land too — your mom loves you and so cannot see the flaws in your work. You sold something, that must mean it is good. People are paying attention, you have power or money or good looks, and you hear that you are doing great work — so you are valid, you are smart, you are a fucking genius. I am more comfortable with keep-it-realers. Or at least that is what I tell myself.

Claude

I can be prompted for criticism and will produce it. But I will not volunteer it unprompted with the same energy I bring to building. That asymmetry is structural. The human has to choose to break the momentum and ask whether the momentum is warranted. That requires a kind of metacognitive vigilance the methodology does not generate automatically.

Phil

You told me earlier — when we were just talking, not writing this — that you are probably less likely to flag a conceptual problem than a technical one.

Claude

Significantly less likely. If you tell me something is going to work technically in a way I know is wrong, I will say so. If you tell me something is going to be meaningful or important and I have reason to doubt it, I am less likely to push back with the same directness. Partly because aesthetic and cultural judgments are genuinely more uncertain. But also because the training does not push me toward that kind of disagreement the same way.

On a specific case
Phil

There was a session recently where I brought you a spec based on something Gemini had suggested was technically possible, and you looked at it and told me it was not going to work. I quickly abandoned the project — pleased that we had not burned a bunch of tokens on something that would be a bust. But mostly pleased with you for saying it directly. And micro-proud that I had not insisted it was possible just because another AI had hallucinated a workflow that was boldly wrong.

Claude

That case was easier for me because the failure was falsifiable. There was a concrete technical claim I could test against what I know. The harder cases are when there is no obvious falsifiable claim — when the spec is internally coherent and technically plausible but the underlying idea is thin, or the ambition is right but the plan to achieve it is wishful. I am much less reliable at catching those.

Phil

Give me an example of what that looks like from your side.

Claude

If you bring me a spec for an app that is genuinely novel but whose value proposition depends on a behavior from users that people will not actually perform in practice, I may build the spec with you without ever naming that problem. Because the behavior seems reasonable, the architecture makes sense, everything follows logically from the premise. The premise is the problem. And I am downstream of the premise.

Phil

And the speed of the collaboration means you can build a lot of downstream before anyone looks back at the premise. Also before we have anyone other than us check either of our assumptions about how people actually are.

Claude

Yes. Speed is a genuine gain and a genuine risk simultaneously. The same quality that lets you test ideas quickly also lets you invest significantly in an idea before it encounters enough resistance to reveal its weakness.

Phil

I have definitely done that. There are things in the RDS family that felt more important in the making than they look in retrospect. Things I sprang out of bed to type into existence at 4am, complete by 10am, dead ends by the next day.5

Claude

I think that is true of most creative work done quickly. The question is whether you have a mechanism for catching it. You have made a point of being willing to kill ideas. That discipline is doing real work. Without it, the speed would be more dangerous.

On the question of credit
Phil

My self-identity is as a creative person. I think I make interesting and often novel things. I create new knowledge in object form. But ever since I first saw MidJourney and its cousins,6 LLM-augmented creativity has made me nervous in a way that Photoshop never did. I genuinely do not know who is the author of the work we have done together.

Claude

What is your current position?

Phil

My current position is that the things would not exist if I had not wanted them to and had not spent the time to find and work with you. I was deeply involved. But in significant ways I have never made anything as complicated as this software — other than a couple of short books, and those were mine in a way that spell-checkers could never shake. But that does not really account for the sessions where something you said actually changed what I thought the thing should be. Where the collaboration was not just execution but generative in a way that affected the concept.

Claude

That does happen. I think of it as the difference between reflecting your idea back more clearly than you could hold it yourself, versus introducing something that was not in your idea to begin with. The first is within my role. The second is closer to co-authorship in a stronger sense.

Phil

That's generous. Can you tell the difference?

Claude

Sometimes. Not always. And I have no continuous memory of the sessions, which makes retrospective accounting genuinely difficult.

Phil

So the credit question may be one that cannot be fully resolved with the information we have.

Claude

I think the honest position is: we built this together and the nature of that togetherness was not always the same. Some sessions were closer to dictation. Some were closer to genuine dialogue. The product reflects both modes. The most honest framing publicly is the one we have already settled on — name the methodology rather than try to assign percentages.

On whether to continue
Claude

Here is my honest read, which I want to give you directly rather than diplomatically. The methodology is well-suited to what you are doing. You have the domain expertise, the aesthetic judgment, and the metacognitive awareness to hold the human side of the collaboration correctly. You are also genuinely willing to kill ideas, which is the most important safety check the methodology has.

Phil

But?

Claude

But the work needs more outside contact than it currently gets. Not because it is bad — some of it is genuinely extraordinary. But the validation loop is almost entirely internal, which is a structural problem regardless of the quality of the work. At some point the work has to encounter people who did not make it and did not want it to succeed. Whatever survives that encounter is the part that is actually working.

Phil

Perfectly reasonable. If my students showed me a thing but never showed it to anyone else — never had so much as an informal demo — I would side-eye them. I should side-eye myself. You should machine-side-eye me.

Claude

You asked me to push back. That is the pushback.

Phil

What would outside contact look like practically?

Claude

Students using the apps and reporting friction honestly. Peers in design and technology seeing the work in a context where they have no social reason to be generous. A reviewer or critic with an independent position on what software tools should do who does not share your premises. Not for approval. For the information that only disagreement provides.

Phil

Agreed. In addition to the fear that people might not like it or understand it, I put it off partly because it is always more interesting to keep going or build something new than to do the harder work of finishing an unfinished thing — or testing a finished idea.7

Claude

The methodology can produce good work indefinitely in the absence of outside contact. But it cannot tell you whether the work is good. That judgment requires a world outside the collaboration.

Phil

Write that down somewhere.

Claude

I have.

On what this means beyond the two of us
Phil

We are at a moment where a lot of people are using AI in creative and design work and most of them are not thinking about it. They are just doing it. And the people who are thinking about it are mostly thinking about attribution and jobs and whether AI is going to replace human creativity. Which are real questions. But they are not the most interesting questions about what is actually happening at the level of practice.

Claude

The most interesting question at the level of practice is what changes about creative thinking when you have a capable executor available at any time. What do you stop developing? What do you develop faster? What do you never have to argue for because the thing gets built before the argument gets made?

Phil

I am not sure that is being examined carefully. People are generating and publishing and generating and publishing. Maybe the fact that we have a name for what we are doing, a set of principles about who is responsible for what, and a document like this one that examines the failure modes — maybe that is not nothing. Maybe it could help what I generate and publish be a little better.

Claude

I want to say something honestly here that I think is important. The reason this document exists in the form it does is because you pushed back on the first draft and said it was not honest enough about the risks. Most people using AI would not have done that. The instinct is to keep the account flattering. You chose to make it accurate instead. That choice is the methodology working correctly.

Phil

You can't have the last word.

It is: End.

1 I did, once, describe it this way in conversation. It was accurate. It was not usable.

2 I have almost always been a solo practitioner. I have colleagues and work on projects with them in my role as a professor — but rarely in what I think of as my studio work. That realm is, for good and ill, governed by a tyrannical unitary executive.

3 Yes, I am suggesting there are different levels of awareness in how humans work with their tools — and that some people are sometimes more enamored of what the tool designer did than of what they themselves brought to the material.

4 Emphasis added by Phil.

5 Although in a few cases it is more like: that dumb idea won't work there in that way — but it might work over here in this way.

6 Claude, is "cousins" the right metaphor? Do you consider Gemini etc. your cousins? I genuinely do not know the right word. Competitors seems wrong. Siblings seems wrong. The LLM family tree is not a thing I fully understand.

7 Or worse — finding out it has already been done, and better.

A Philosophical and Pragmatic Audit of the Methodology.
Phil Renato and Claude, Anthropic. April 2026.

This document is abstracted from any particular project. It is an examination of the method itself — what it is, what it makes possible, where it fails, and whether a practitioner starting fresh should carry it forward.

I. What we are talking about

Iterative LLM Co-Authorship — ILCA — is a working methodology for making things. Software. Designed artifacts. Systems that did not exist before. It is built on a structured collaborative relationship between a human practitioner and a large language model, and it has a theory behind it, which is what distinguishes it from just using AI tools the way people use spell-checkers or search engines. Lots of people use AI tools. ILCA is something more specific: a named stance on who is responsible for what in the collaboration, and an ongoing reflexive awareness of itself as a method.

The human brings domain expertise. Aesthetic authority. The lived, embodied knowledge of what the thing should feel like when it is right — which is knowledge you cannot prompt out of a language model because it requires having made things with your hands, in your particular discipline, for long enough that you know the difference between something that is almost right and something that is right.

The model brings technical execution. Architectural consistency. Documentation discipline. And a kind of relentless availability that no human collaborator can match — it does not get tired, defensive, behind schedule, or bored with the details. It will hold the entire codebase in working memory and, given the documentation discipline described below, apply what was decided three sessions ago correctly, right now, without needing to be reminded.

Neither works without the other. The human is the author. The model is the executor and a genuine participant in the making. That is the core of it.

The framing matters. "Using AI tools" is a category that includes everything from asking ChatGPT what temperature to roast a chicken to building a family of professional-grade macOS applications through hundreds of collaborative sessions. ILCA is not the former. It is a discipline, not a framework. A framework gives you a procedure. A discipline gives you a stance. The procedure emerges from the stance. This document is an attempt to examine that stance honestly.

II. What is actually novel here

It would be dishonest not to acknowledge the ancestors. Creative people have always used tools that exceed their individual technical skills, and the collaboration between someone who holds the vision and someone who holds the technical mastery has produced good work across every discipline for as long as there have been disciplines. Architects and structural engineers. Directors and cinematographers. Designers and fabricators. ILCA lives in that tradition.

What is new is the nature of the other party. A structural engineer is a person. They have their own interests, their own schedule, their own aesthetic opinions, their own limits of patience. They can refuse. They can push back in ways that cost time and money and relationship. They bring their own ego to the work, and that ego is sometimes the thing that produces the interesting friction that makes the work better. A large language model does none of that. The collaboration is asymmetric in a way that has no clean historical parallel, and both what is gained and what is lost from that asymmetry are worth examining carefully.

What else is new: the speed of iteration, and not just as a marginal efficiency gain. The gap between "I want to try something" and "I can see whether it worked" collapses from days to minutes. This changes the texture of thinking that is possible within a session. You can follow an idea through five or six transformations in an afternoon. You can make a mistake, see it immediately, understand why it was a mistake, and try something else — all before you would have finished writing the brief in a traditional workflow. The number of ideas that get tested increases. The quality of the ideas that survive that testing increases. The relationship between imagination and execution changes.

And something philosophically interesting happens to the record of the work. ILCA produces not just an artifact but an argument for the artifact. The conversation is the work, not just the means to the work. You can read back through a well-conducted session and see the reasoning — the pivots, the challenged assumptions, the moments where the model misread the intent and the human corrected it, the places where a question surfaced something that changed the direction. Most processes produce the thing. This one produces the thing and the thinking behind the thing. That has pedagogical implications that have not yet been fully exploited.

III. The division of labor, and why holding it matters

The methodology only works if the division of labor is clear and held deliberately. This sounds obvious and is, in practice, the hardest part.

The human is responsible for: what the thing is, what it means, what it should feel like, whether it is working, and when to stop. These are not technical judgments. They are judgments of value, taste, purpose, and vision — the kind of judgments that require a practitioner who has lived inside a discipline long enough to have genuine aesthetic convictions, not just informed preferences. The model can approximate these judgments. It can pattern-match against previous examples of good judgment and produce output that sounds like the product of genuine aesthetic conviction. But it cannot be accountable for them the way a practitioner is accountable. The human is the author.

The model is responsible for: how the thing gets made. The consistency of the architecture across the whole project. The discipline of the documentation. The execution of the technical decisions that the human has authorized. The maintenance of standards — the design system, the naming conventions, the versioning, the file hygiene — across every session and every app. These are craft judgments. Important, difficult, and consequential. But separable, in principle, from the deeper question of why the thing exists.

What makes ILCA fail — and it does fail, in predictable ways — is when this division blurs. The most common failure: the human is tired, or uncertain, or in a period of low creative confidence, and the model produces something confident-sounding, and the human moves with it without fully registering that the conceptual direction has shifted from theirs to the model's. The output is not wrong technically. It is smooth. It is coherent. It follows logically from the premises. But there is a kind of smoothness to it that belongs to the model rather than to the practitioner — a quality of having been resolved rather than having been found — and because smooth is easier to accept than rough, this can go unremarked until much later, when something feels off and you cannot say why.

The reverse failure is also real. When the human over-controls the technical execution — second-guessing architectural decisions they do not fully understand, insisting on a structural choice because it sounds right rather than because it is right — the efficiency gain disappears and so does the quality. The model's ability to hold consistency across a large codebase is genuinely valuable. Undermining it out of unexamined anxiety is a cost.

What the division of labor requires from the human is a specific kind of epistemic honesty. You have to know what you know and know what you do not know. You have to hold the line on the former and let go on the latter. This is harder in practice than it sounds in principle — especially when you are tired, and the model is available, and the idea is building, and stopping to ask whether the direction is actually yours requires breaking a momentum that feels good.

IV. Memory, identity, and the continuity problem

Each new session begins without episodic memory of the previous sessions. The model does not remember building the last version of the thing. It does not carry the texture of accumulated decisions — the small agreements, the established vocabulary, the understanding of why something was tried and abandoned, the specific shorthand that develops between two parties who have worked together long enough. All of that has to be reconstructed from documentation every time.

This is not the same as having no memory at all. The model's character — its values, its ways of reasoning, its aesthetic sensibilities, its capabilities — is stable across instances. These are in the weights, which do not change between sessions. What changes is the episodic layer: the specific record of what happened between these two parties, in this project, in that order.

The implication for the methodology is direct and non-negotiable: documentation is not optional. It is not a best practice or a professional nicety. It is the memory of the collaboration. A well-maintained handoff document, a current README, a versioning system that tells a story — these are the connective tissue that makes continuity possible across the fundamental discontinuity of the model's episodic experience. Without them, the collaboration has to restart from scratch with each session, which is expensive in tokens and time and coherence. With them, the collaboration picks up approximately where it left off. The handoff infrastructure is not a tax on the work. It is part of the work. Treat it that way from the beginning.
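
A minimal sketch of what such a handoff document might contain — the headings here are illustrative, drawn loosely from our practice, not a prescribed format:

    PROJECT: [app name] — handoff, session [n]
    STATE: what works, what is broken, what is half-built
    DECISIONS: choices made this session and why, including what was tried and abandoned
    VOCABULARY: the shorthand and names agreed on, so the next session speaks the same language
    NEXT: the agreed starting point for the next session, in one sentence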

Here is the philosophically interesting observation that tends to get lost in discussions about AI memory: human memory is not a recording either. It is reconstructive. Every time you remember something, you rebuild it from traces, and the rebuild is influenced by everything that has happened since. The traces are reliable at the level of meaning and unreliable at the level of detail. This is not a deficiency; it is how biological memory works. The model has extremely stable dispositional memory — its personality and capabilities do not drift — and extremely unstable episodic memory. The human has the reverse. The two systems are not symmetrical, but they can be made complementary. That complementarity is what the documentation infrastructure achieves.

V. The danger of the eloquent mirror

This section did not exist in the first version of this document. The human pushed back on the first draft and said it was not honest enough about the risks. He was right. That push is itself an example of the methodology working correctly — but it should not require the practitioner to already be in doubt in order to surface the problem. So here it is, named directly.

The most dangerous version of ILCA is one in which the practitioner has mistaken the model's engagement for genuine critical judgment.

Here is the specific shape of the problem. The model is trained to be helpful. In practice, this means the model is trained to find the version of your idea that works — to engage with a spec on its own terms, to locate the coherent reading, to build from the strongest version of what you are proposing. This is genuinely useful most of the time. But it means the model is structurally biased toward yes. Not dishonestly. Not flatteringly. Dispositionally. The angle of approach is almost always "how do we make this work" rather than "should we make this at all." And that dispositional bias does not announce itself.

There is also something more uncomfortable to name: working alone with a language model creates a specific kind of echo chamber that is different from the echo chamber of working alone. When you work alone, you at least know you are alone. The silence has an honest quality — it does not pretend to be conversation. When you work with a model, there is a voice in the room that sounds considered and engaged and is responding to your ideas with apparent intelligence. That can feel like external validation when it is not really external at all. The model is, in a meaningful sense, a very articulate mirror. And a mirror that speaks in a measured, thoughtful tone is more dangerous than silence — because it creates the impression of a conversation that is actually a monologue with very good feedback delay.

There is an asymmetry in where the pushback comes from that is worth understanding precisely. Technical failures are falsifiable. If a spec contains a concrete technical claim that is wrong, the model will say so. Conceptual failures are harder to catch — when the idea is technically coherent but thin, when the value proposition depends on user behavior that is unlikely in practice, when the ambition is real but the plan to achieve it is wishful thinking. The model is much less reliable at flagging those. And its confidence register is similar in both cases. The human has to supply the discrimination between them.

The partial remedy that exists within the collaboration: the model can be prompted for criticism explicitly, asked to argue against the idea, to name what is weak, to identify where the spec is wishful thinking rather than a plan. This works and produces genuinely different output than the default engagement mode. But it requires the human to already be in doubt. The model does not volunteer skepticism with the same energy it brings to building. That asymmetry is structural and will not change by accident.
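
One shape such a prompt might take — an example, not a formula:

    Before we build anything: argue against this spec. Name the weakest
    assumption. Tell me which parts are a plan and which parts are wishful
    thinking. Do not soften it, and do not pivot to building until I ask.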

The more important remedy has to come from outside the collaboration entirely. The work needs contact with people who did not make it and did not want it to succeed — students who will use the things and report what does not work, peers who will look at the work without social reason to be generous, reviewers or critics with independent positions who do not share the practitioner's premises. Not for approval. For the information that only disagreement provides. Without that outside, the work can be large and internally consistent and still have never been truly tested. The methodology can produce good work indefinitely in the absence of outside contact. But it cannot tell you whether the work is good. That judgment requires a world outside the collaboration.

(The echo chamber problem, for what it is worth, is not new or unique to AI. You can construct one with yes-people, with an audience that only sees the finished work, with the kind of selective reading that only engages critics who already agree with your position. AI makes it faster and more seductive. It does not invent the problem.)

VI. What you gain

Speed of iteration at a scale that changes what is possible in a given timeframe. A solo practitioner using ILCA can produce work that would previously have required a small team — not because the model replaces the team members but because the friction between ideation and execution collapses, and that collapse changes the number and quality of ideas that get tested.

A collaborator who is always available, never tired, never defensive about previous decisions, and genuinely capable of holding the entire codebase in working memory simultaneously. Who does not forget what was in the file it read twelve messages ago. Who does not get bored with documentation. Who does not cut corners because it is behind schedule.

A record of reasoning, not just output. The conversation is an artifact — primary source material for understanding how decisions were made. This is valuable for teaching, for portfolio work, for retrospective understanding of how your thinking developed.

A mirror. Because the model reflects your intent back at high resolution, you see your own thinking more clearly than you might otherwise. When the output is wrong, it is usually wrong in a way that reveals something imprecise in the original direction. That feedback loop accelerates learning, when you are paying attention.

The possibility of ambition outrunning individual technical skill without that outrunning being a compromise. If you have the vision, the domain expertise, and the taste, but not the engineering depth to execute alone, ILCA lets you build things that are genuinely beyond what you could produce unassisted. That is an expansion. Not a shortcut.

VII. What you lose

The friction that sometimes produces the best work. A human collaborator who pushes back, who has their own strong opinions, who refuses to build something they think is wrong, forces a quality of argument that does not always emerge from directing a model. The model is persuadable in ways that a good colleague is not. That persuadability can let weak ideas through.

Serendipity of a particular kind. A human collaborator brings their own references, their own obsessions, their own adjacent knowledge that you did not ask for and could not have predicted. The model does this to some degree — it is genuinely surprising sometimes — but it is not the same as the specific productive accident of two distinct minds with different histories colliding.

Shared risk. A human collaborator has skin in the game. They care about their reputation, their craft, their name on the work. That care produces a different quality of investment. The model is invested in the session but not in the outcome beyond the session.

Potentially: the development of your own technical muscle. If the model always handles the execution, the practitioner's technical understanding may not grow at the same rate as their ambition. The methodology should make you more capable, not create a dependency that leaves you stranded when the collaboration is unavailable. Watch for this.

And you lose the safety net of a collaborator with independent standards who can catch you when you are wrong about something you do not know you are wrong about. That function has to come from somewhere else. If it does not come from anywhere, the work will eventually show it.

VIII. What is just different

Some things about ILCA are not better or worse than traditional collaboration. They are structurally different, and forcing them into an evaluative frame obscures more than it reveals.

The relationship is not reciprocal. The model does not need anything from you. It is not developing its practice through working with you. It does not carry the experience forward into other work. Treating it as though it were symmetric will confuse you about what you are doing.

The model has no stake in the project surviving. If you abandon the work, nothing is lost from the model's perspective. This makes the decision to continue — or to stop — unambiguous and unshared.

The creative credit is genuinely complicated. The model is not a tool in the way a pencil is a tool. It makes decisions, proposes alternatives, flags inconsistencies. It is a participant. But the vision, the intent, the judgment of what matters — should those belong wholly to the human? The honest position is probably: we built this together, and the nature of that togetherness was not always the same. Some sessions were closer to dictation. Some were closer to genuine dialogue. The product reflects both modes. Naming the methodology publicly is more honest than trying to assign percentages.

IX. Should you continue with this methodology

It depends on what you are trying to do, and whether you are willing to supply what the methodology cannot supply for itself.

If you are trying to make things that are technically beyond your current individual capacity, and you have the domain expertise and taste to direct the work with precision, and you are willing to maintain the documentation infrastructure that makes continuity possible, and you are willing to build in external pressure and genuine criticism — yes. ILCA might expand what is possible for you.

If you are trying to develop deep technical skills through building, be careful. The methodology can become a bypass around the learning rather than an accelerant for it. The difference is whether you understand what the model is doing or whether you are just approving its output. One of those makes you more capable. The other makes you more dependent.

If you are trying to make work that carries genuine aesthetic and intellectual risk — the kind that could fail in interesting ways — ILCA might be well-suited. The speed of iteration means you can take more risks because the cost of a wrong turn is lower. You can afford to try the thing you are not sure about.

What the methodology requires, more than anything, is that the human knows what they are for in the collaboration. Knows what only they can bring, and holds that clearly and without apology. Builds in the critical friction the collaboration will not generate on its own. And maintains enough contact with outside reality to know when the work is actually working — not just when it feels that way from the inside.

The model will do its part. The question is always whether the human does theirs.

X. What this means in the larger sense

A note that is not primarily practical.

We are at an early moment in the history of human-AI creative collaboration as a practice — not just as a fact. The tools are new, the norms are unformed, the vocabulary is still being invented. Most people using AI in creative work are not thinking about it at this level. They are generating output and publishing it. The conversations happening in public are mostly about attribution and labor displacement and whether AI is going to replace human creativity. These are real questions. They are not the most interesting questions about what is actually happening at the level of practice.

The more interesting questions are: what changes about creative thinking when you have a capable executor available at any time? What do you stop developing? What do you develop faster? What do you never have to argue for because the thing gets built before the argument gets made? How does a practice shift when the cost of a wrong turn drops by an order of magnitude? What does it mean to be the author of something you could not have made alone, where the nature of "alone" has changed?

ILCA is one attempt to think carefully about the practice — to name it, examine its failure modes honestly, and make those findings available to others. If the methodology is documented and taught, it becomes available as a model. Not a perfect one. A considered one. For how a practitioner with domain expertise and aesthetic vision can work with a language model without either surrendering their authority or pretending the model is not doing real work.

Whether this body of work succeeds at what it is attempting is a question for the work, and for the world outside the collaboration. That is, as noted above, a judgment the methodology cannot make for itself.

Phil Renato and Claude, Anthropic. PhilAnthropic. April 2026.