on anthropomorphization of AI & agents
TLDR: I’m torn.
Since before “AI” became synonymous with modern language models, I have considered the term “artificial intelligence”, alongside the persistent anthropomorphization of the technology, a major hindrance to its effective use. Yet trying to change vernacular is about as fruitful as fighting the wind.
Modern “AI agents” seriously raise the question, on a metaphysical level but (more importantly to me) a practical & technical one – do agents work better if treated like humans? Should I spin up a team of agents with roles like CTO, engineering manager, SWE, project manager, product manager, designer, and more to accomplish tasks? Or is this wasteful and juvenile compared to our eventual efficient use of this technology?
I can’t tell. Intuition tells me that leaning into constructs already present in the training data – “act like a founder & CTO, own the outcome, …” – may yield good results. But intuition also tells me this is silly and that there’s a solution an order of magnitude more efficient. We’ll see!