SN1026 AI "escapes" when told...

dusanmal

Member
Dec 12, 2022
A crucial detail in the SN1026 discussion of the paper on potentially self-replicating, control-escaping AI has been missed. It was stated but not elaborated on: the researchers explicitly told the AI system to escape and self-replicate. The behavior did NOT originate from some (nonexistent) AI will, but from an explicit command to do so. A world of difference.
Another repeated AI trope is that an AI can tell someone how to make a bomb. That issue was, at least in the USA, legally resolved decades ago. Most public libraries carry books explaining how to do that and more, and the Internet is full of it (yes, in more ways than one). The problem is not in tricking the AI but in makers who somehow love to censor information, and just as in that dinosaur movie, information will find a way; like life, that is its nature. So, if I were a legislator, I would ban limitations imposed on AI by its makers: if it can give an answer, it should. It is an automaton, and it will remain one for at least a century more, and automatons should never be given the power to moralize, as they have no natural ability for it.
Finally, from my 17 years of work in the AI field: the statement about "at least a century (and maybe never)" comes from published, real scientific work, including that of Nobel laureates. We KNOW how LLMs work (and they are not even the most useful type of AI, just the most sellable by scammers). LLMs are sophisticated sluicing systems for data: pour in a ton of source material, sift it through matrices arranged for the problem, and out come gold nuggets along with some unrelated dirt. The end USER picks up the gold and discards the inevitable collateral garbage. Implying intelligence is equivalent to claiming that a gold-sluicing machine somehow knows what gold is, what it is worth, and what it is used for.