Just fucking dumb

One of the AI defenders on Substack is playing Altman’s game by planting doubt and confusion in our minds. If we believe this crap we’ll be less inclined to BLAME THE MAKER OF THE TOOL.

= = = = = START IDIOCY:

When an AI agent causes harm, who’s responsible? The developer, the deployer, the user — or the agent itself?

Each answer requires first answering a harder question: what kind of thing is this? The available categories don’t fit.

If it’s a tool, product liability applies. But tools don’t interpret instructions, adapt to circumstances, or contact emergency services on their own initiative.

= = = = = END IDIOCY.

Sheer nonsense. It’s true that simple tools like hammers don’t interpret, adapt or notify. Complex mechanical tools were doing all of those things LONG before electronic computers, let alone LLMs. Railroad signaling systems in the 1880s, a vast electro-mechanical web, were able to alert the emergency office. Western Union and Bell networks and electric power grids in the 1920s adapted to circumstances and alerted their emergency offices when things went wrong.
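
If you want the point in concrete form, here’s a rough sketch (Python, all names invented for illustration) of the logic an 1880s block signal carried out in relays and wire: interpret the track state, adapt the signal shown, and ring the dispatcher’s office on a fault. No agency, no mind, just a machine responding on its own initiative in the only sense that ever mattered.

```python
# Minimal sketch of what a block-signal track circuit did
# electro-mechanically: interpret inputs, adapt the aspect shown,
# and notify the emergency office on a fault. Names are hypothetical.

from enum import Enum

class Aspect(Enum):
    CLEAR = "clear"
    CAUTION = "caution"
    DANGER = "danger"

def set_aspect(block_occupied: bool, next_block_occupied: bool,
               circuit_ok: bool) -> Aspect:
    """Interpret inputs and adapt; the fail-safe default is DANGER."""
    if not circuit_ok or block_occupied:
        return Aspect.DANGER      # broken wire or occupied block: stop
    if next_block_occupied:
        return Aspect.CAUTION     # adapt to conditions one block ahead
    return Aspect.CLEAR

def notify_emergency_office(message: str) -> None:
    """Stand-in for the telegraph bell to the dispatcher."""
    print(f"ALARM to dispatcher: {message}")

def tick(block_occupied: bool, next_block_occupied: bool,
         circuit_ok: bool) -> Aspect:
    aspect = set_aspect(block_occupied, next_block_occupied, circuit_ok)
    if not circuit_ok:
        notify_emergency_office("track circuit failure, signal at DANGER")
    return aspect

# A broken track circuit trips the alarm "on its own initiative":
print(tick(block_occupied=False, next_block_occupied=False, circuit_ok=False))
```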

LLMs understand and speak language in a far more sophisticated way than railroad signals, but they still perform the same basic functions. And they turn in a LOT more false alarms than the old systems, as mentioned in the previous item.
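
The false-alarm point is plain arithmetic. A back-of-envelope sketch (all numbers invented for illustration): when the event being watched for is rare, a jumpy detector that trips on anything ambiguous buries you in false alarms, while a crude circuit that only trips on hard faults stays mostly honest.

```python
# Back-of-envelope false-alarm arithmetic. All numbers are invented
# for illustration, not measurements of any real system.

def false_alarm_fraction(base_rate: float, sensitivity: float,
                         false_positive_rate: float) -> float:
    """Fraction of all alarms that are false (1 - precision)."""
    true_alarms = base_rate * sensitivity
    false_alarms = (1 - base_rate) * false_positive_rate
    return false_alarms / (true_alarms + false_alarms)

# Old signal circuit: crude, but only trips on hard faults.
print(false_alarm_fraction(base_rate=0.001, sensitivity=0.90,
                           false_positive_rate=0.0001))  # ~0.10: 1 in 10 alarms false

# Jumpy "sophisticated" detector: trips on anything ambiguous.
print(false_alarm_fraction(base_rate=0.001, sensitivity=0.99,
                           false_positive_rate=0.01))    # ~0.91: 9 in 10 alarms false
```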