who_owns_your_code

When one relies entirely on a large language model to write code, who really owns the outcome?
#AI#LLM#coding

I've been seeing a lot of hype recently around platforms that promise to "teach you to code with AI".

It's sparked a question in my mind - if you write code and create a project, but you cannot understand, maintain, update, modify, or fix the code you've written, then have you really written it? Or have you built a dependency on a synthetic text extruder that transforms your spoken words into something resembling a technical solution?

The difference between the two is subtle, but vast.

It's the difference between architecting a large construction project, and operating an excavator.

It's the difference between combining flavours and textures to create a new dining experience, and following a recipe to produce a dish.

The former requires a deep understanding of why decisions are made, the implications of each decision taken at the design stage, and how those decisions will interact with one another. It means thinking through the logic of the specific challenge one is solving, the various permutations in which it may present, and how different approaches might be optimized.

The latter requires the ability to operate a complex tool and to follow a set of steps to complete an objective.

It is possible to build things with a chatbot. However, I don't believe it's possible to build them with the same degree of understanding and ownership when one is reliant on an LLM to transform prompts into code. If you rely on an LLM, then I don't believe you own the code you've written - you own an obligation to the LLM provider to continue using their product.

That's a slippery slope that may not be obvious until you're already sliding down it, unable to self-arrest.

Be wary - it's over-hyped out there.