AI's basis in the human brain

Mike Legatt

Member
Feb 5, 2025
Evanston, IL
I've been spending a lot of time recently thinking about the similarities and differences between AI (especially modern GenAI) and the human brain, and the "neural networks are nothing like the brain" idea hasn't been sitting well with me (although I used to say that all the time).

I have backgrounds in neuropsychology, energy systems engineering, and CS, and like everywhere else, I am seeing how AI is changing habits, thoughts, and plans for critical infrastructure.

In preparation for a talk, I've been going back and reviewing many of the contributors to AI from the early days to the present, as well as how many psychologists and neuroscientists did foundational work in building and understanding the technology. I'm also amazed at the degree of biomimicry in AI, both in the ways neuroscientists work to make AI more "human brain-like" and in how neural nets can mimic fundamental behaviors of the brain (for example, how image recognition has, at least on the surface, self-organized in a way similar to our human visual pathways). I'll share in case it's of interest to anyone.
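
For anyone who wants to see that last point for themselves, here's a minimal sketch, assuming PyTorch and torchvision are installed (the choice of ResNet-18 and the output filename are just for illustration). The first convolutional layer of any ImageNet-trained classifier tends to self-organize into oriented edge and color-blob detectors, qualitatively like the receptive fields in early visual cortex, without anyone designing them that way:

```python
# Minimal sketch: visualize the first-layer filters a CNN learned on its own.
# Assumes PyTorch + torchvision; ResNet-18 is an arbitrary illustrative choice.
import torchvision
from torchvision.utils import make_grid, save_image

# Any ImageNet-trained CNN shows the effect; weights download on first use.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# The very first convolution: 64 filters, each 3 (RGB) x 7 x 7 pixels.
filters = model.conv1.weight.detach()            # shape [64, 3, 7, 7]

# Rescale each filter to [0, 1] so it can be viewed as a tiny RGB image.
lo = filters.amin(dim=(1, 2, 3), keepdim=True)
hi = filters.amax(dim=(1, 2, 3), keepdim=True)
filters = (filters - lo) / (hi - lo)

# Tile all 64 filters into one image; most look like oriented edges or
# opponent-color blobs, reminiscent of V1 simple-cell receptive fields.
save_image(make_grid(filters, nrow=8, padding=1), "first_layer_filters.png")
```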

With a system as complicated and complex as the human brain, I don't know whether we'll ever be able to approximate that level of capability in technology. However, I am fascinated by how many of these foundational thinkers saw AI neural networks as an opportunity to better understand the complex creatures we are.
 
Favorite old story:

IBM taught a computer to play chess via 'game theory', simply valuing each move by how much, in previous experience, it had contributed to ultimate success.

When playing, the human opponent hemmed and hawed, thinking over each move, made the move, then sat back expecting the computer to do the same: to take a l-o-n-g time to 'think it over' before making its next move.

The computer made its next move instantly, throwing the human player off their game, so to speak. IBM then programmed in delays.

In other words, 'game theory' was missing 'social connection', so they added it, though hard-wired rather than learned.
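
Here's a toy sketch of that two-part design in Python; the pile game, the search depth, and the delay are all invented for illustration. The move valuation is plain game-tree (minimax) search, and the 'social connection' is nothing but a fixed sleep bolted on after the decision, hard-wired rather than learned:

```python
import time

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Score `state` (from the computer's point of view) by looking ahead."""
    legal = moves(state)
    if not legal:                  # game over: the side to move has lost
        return -1 if maximizing else 1
    if depth == 0:                 # out of lookahead: fall back to a heuristic
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal]
    return max(scores) if maximizing else min(scores)

def choose_move(state, moves, apply_move, evaluate, depth, pause=1.0):
    """Pick the best-valued move, then pause: the hard-wired 'social' delay."""
    best = max(moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     moves, apply_move, evaluate))
    time.sleep(pause)  # not learned: a fixed delay so the instant reply doesn't unsettle anyone
    return best

# Demo on a made-up pile game: take 1, 2, or 3 stones; taking the last stone wins.
legal = lambda pile: [n for n in (1, 2, 3) if n <= pile]
take = lambda pile, n: pile - n
heuristic = lambda pile: 0         # no opinion about non-terminal positions
print(choose_move(10, legal, take, heuristic, depth=10, pause=0.5))  # -> 2 (leaves a losing pile of 8)
```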

A more modern, more human AI might suggest paying attention to everything, not just the game board.

A modern human AI might learn to play football by expecting to avoid the cops on the drive to the game, for example.

Human brains have ruts and familiar pathways, and they trim pathways over time for any number of reasons we're still trying to understand.

Modern human AI, what does it 'throw out'?

Is it all game theory, faster and unprejudiced, rut-less compared to its programmers, but myopic nevertheless?

Maybe the point of AI is to NOT emulate human brains: not to be better, but to be different and complementary, doing things we can't do, the way photography can 'see' and show us things we can't see otherwise.

Thanks for the provocative thought piece.

I work on AI use in diagnostic neurology. On the technical side, as a physicist, I work with fMRI data. From that cross-discipline viewpoint I'd say that in the human brain we have a system equivalent to current state-of-the-art AI (which I refuse to call AGI because, first, it is not, and second, likely realizing the same, OpenAI and Microsoft redefined AGI as "AI that can produce $100 billion in profits"... no mention of technical capabilities, never mind self-awareness). It is the cerebellum. It is unconscious, but it can be driven by the sentient parts to learn complex behaviors and later execute them with expertise. It can quickly deal with large amounts of data using the pathways it was trained on. In the real world, we humans are the "gray matter" for AI.
 

Agreed! The term AGI (and ASI) unfortunately is far too vague to be useful. It kind of reminds me of a modern-day version of Goodhart's law ("When a measure becomes a target, it ceases to be a good measure"). Something like, "Once a technological term becomes something we use in marketing materials, board materials, and financial statements, the meaning of that term is significantly diminished."

I do think there is an interesting parallel going on, though. A lot of the work I did early in my career was performing neuropsychological evaluations (IQ, memory, personality, and so on). However, baked into each of those tests were a lot of assumptions, and like the Turing test, perhaps those assumptions need to be re-evaluated in the modern era, especially if we're looking to build definitions that can encompass, and compare, humans and AIs. We've known for decades (e.g., Gardner, 1983) that intelligence isn't a unitary construct anyway, so trying to collapse something so complex into a single number may not actually be that useful.

Certainly, an LLM that is RAG'd to say, "Oops, my bad, I'm sorry," probably doesn't feel remorse. It reminds me of fMRI studies of people with autism who are trained to recognize facial expressions and emotions. Even though they can reach an accuracy at or even above that of non-autistic people, the areas of the brain that are activated differ between the two groups. It's nearly the same outcome, but via a totally different pathway.
 