Had a discussion with some colleagues about the potential of using #AI for incident auto-mitigation.
It struck me that a lot of the concerns boil down to the fact that we are not used to the idea that computers can also exhibit a failure mode we know as "human error". We are used to computers failing "as programmed". Framed that way, it helps that we've already invented a lot of guardrails to keep humans from making dumb mistakes (think peer review, two-person approval, staged rollouts), and many of them translate to the AI context.
I don't know where I'm going with this. Just a thought.