A crucial detail was missed in the SN1026 discussion of the paper on potentially self-replicating, control-escaping AI. It was stated but not elaborated on: the researchers explicitly told the AI system to escape and self-replicate. The behavior did NOT originate from some (nonexistent) AI will, but from an explicit command to do it. That is a world of difference.
Another repeated AI trope is that AI can tell someone how to make a bomb. That issue was, at least in the USA, legally resolved decades ago. Most public libraries have books explaining how to do that and more, and the internet is full of it (yes, in more ways than one). The problem is not in tricking the AI but in makers who somehow love to censor information; and just like in that dinosaur movie, information will find a way, same as life, it is its nature. So, if I were a legislator, I would ban limitations placed on AI by its makers: if it can give an answer, it should. It is an automaton and will remain so for at least a century more, and automatons should never be given the power to moralize, as they have no natural ability for that.
Finally, from my 17 years of work in the AI field: the statement about "at least a century (and maybe never)" comes from published, real scientific work, including Nobel laureates. We KNOW how LLMs work (and they are not even the most useful type of AI, just the most sellable by scammers). LLMs are sophisticated sluicing systems for data: pour in a ton of source material, run it through matrix sifters set up for the problem, and get gold nuggets plus some unrelated dirt at the end. The end USER picks up the gold and discards the inevitable collateral garbage. Implying intelligence here is equivalent to claiming that a gold-sluicing machine somehow knows what gold is, and its value and use.