Role of AI in my writing
I have a confession to make to anyone who has stumbled across this blog. Some of my earlier posts contained substantial AI-generated segments. I've since added a note at the start of each to say so. The post ideas were always mine, and I always edited the outputs, but there were still occasions where AI-generated passages made it into my final draft unedited. I'm currently rethinking the role of AI in my workflow.
I fell for the siren’s song of low effort output
I've been wanting to publish more content. The main reason is that a lot of people I respect have blogs spanning years. I also wanted to share perspectives I've formed and problems I've solved, both in case anyone is ever searching for them and to keep a log for myself. The post about CUDA on Void Linux has been particularly helpful, since I've set up multiple Void Linux machines that all hit the same issues.
The issue was that creating content took time away from other things I could have been working on. I thought this was exactly what AI acceleration was for, so I leveraged ChatGPT to help automate the generation of some posts. In most cases I went in and manually edited for style and formatting, but there are plenty of ChatGPT-generated sentences that made it to my blog. This got useful (at least to me) posts out the door, but in hindsight I don't think the trade-offs were worth it. What I gained in posting velocity I lost in:
- Posts are not authentic. While I always picked the topics (no "what should I post" prompts), it's hard to separate which thoughts are genuinely mine and which are just background noise added by the LLM. The best analogy comes from image generation models: the tree in a human-made painting was placed for a reason and carries some meaning, while the tree in a model-generated painting is just there, meaning nothing. I feel much the same way about LLM-generated text.
- I miss out on developing my writing skills. I personally think this is the most important reason not to rely on LLMs for writing. I've written plenty of academic and technical documents (see my publications), but that kind of writing is performative. I've been trying to embrace Paul Graham's advice to write like you talk. Outsourcing the first draft to an LLM means adopting its interpolated, averaged style instead of finding my own unique voice.
Seeing the light and fighting the slop
What helped me see the light was Mitchell Hashimoto's writing, in particular his article on vibing a Ghostty feature. In it, I found a workflow that made more sense to me. I think I've been on the internet too much and was trying to convince myself that Gas Town and endless Ralph Loops were actually the future. I haven't convinced myself yet, and I think I will stop trying to. Instead, I'm going to be more intentional about doing things manually, both for the love of it and for the skill-set growth.
This does not mean I will go back to the "halcyon" pre-AI workflows. LLMs have been transformative for me. I just plan to use them as tools where they belong, and not buy into the developer-to-agent-manager transition that is being sold. The exact role of agents in my work is still evolving.
A commitment to fully human-written posts on this blog
Moving forward, anything I publish on this blog will be fully human written, with LLMs limited to an assistive role. For writing, that means using them as an editor to help with tone, structure, and grammatical errors. I also use them as a sounding board when developing ideas. Claude Code and Codex still have a big role to play in automating certain tasks (tedious edits, research on specific topics, even code changes once I know exactly what I want), but I no longer think they should have a primary role in something as personal as publishing my own thoughts.
I'll close with the quote from Mitchell Hashimoto that really hit me:
> I believe good AI drivers are experts in their domains and utilize AI as an assistant, not a replacement.