In SN1064, Steve said something profound about AI (though the discussion then shifted to anthropomorphizing AI, which is a separate issue): to paraphrase, it is extremely difficult, if not impossible, to corral AI. From my 17 years of work in AI, I would say it is impossible to corral AI, and that we owe that admission to its users and to society in general. Barriers can be erected, but they are just paper walls, trivial to break through. They can't be truly effective because if an AI can do something, preventing it from doing so is futile; at its core, it is built to do everything it can.

A related topic for discussion: knowing this, I would feel much easier about AI if the BigAI corporations would simply admit it and explain to society that there is "gambling going on". Pretending that we can build functional barriers around what AI can do is lying, and it creates false expectations that get people hurt, via the assumption that AI platforms can moderate content (e.g., fake imagery).

So, should BigAI build paper walls and lie, or drop all limits and tell the public to improve their morality and adjust their expectations?
