SN 1004 A Chat with GPT


floatingbones

Member
Dec 2, 2024
I can second Leo's recommendation of Stephen Wolfram's short book "What is ChatGPT Doing... and Why Does it Work?" (2023). It's available from the brand-A mail-order shop or from your local bookseller. You can also find a shorter online essay that Stephen published on his blog on this topic early in 2023. Most of Stephen's recent books have been published for free on his blog; he also serves as explainer-in-chief for his audience. I have his book in my physical inbox; I will read through it this week. Wolfram Alpha was one of the first online AIs; I see it is discussed extensively in the little ChatGPT book.

Wolfram Research has just announced a product that uses ChatGPT to generate Wolfram Language code. Stephen's new blog entry is called Useful to the Point of Being Revolutionary: Introducing Wolfram Notebook Assistant. The Notebook Assistant is a ChatGPT-based code generator that is trained entirely on the Wolfram Language. Stephen is not prone to hyperbole; this sounds like a genuine breakthrough product. I have the sense that Theodore Gray, the guy who made the Wooden Periodic Table Table, was heavily involved in the creation of this platform. Stephen notes that the code generator is far better than he is with the Wolfram Language.

I had a run-in with Google Gemini about 6 months ago. I asked Gemini about Gerald Pollack, who is a professor at the University of Washington. Gemini would tell me nothing about Pollack if I asked it who "Gerry Pollack" was. OTOH, it was perfectly happy to report about the person if I used the name "G H Pollack". This was truly absurd, because Gemini would use Pollack's full name in its response. You can see my entire dialogue in the attached file. This is not an isolated failure; someone trained Gemini with some truly bad data and/or prompts.

ChatGPT will try its hardest to answer your questions, to the point of totally making up #!$$. Your description of the MASM experience was spot on. One could say that the AI is an eternal optimist, but that is giving it far too much credit. That AI, and all AIs, have no shame.
 

Attachments

  • Conversation with Gemini about Gerald Pollack 7:26:2024.pdf
For kicks I asked Grok to write a simple SQL query to show me all primary contact records from a GoldMine CRM database. I just wanted to see if it knew about GoldMine, how good it was, and if it could show me any interesting methods I hadn't considered before. I have been SQL programming for GoldMine for 29 years now.

It "acted like" it knew what it was talking about, but the answer it gave used a table name that had never existed in GoldMine.

So I pointed out the incorrect table name, and it then apologized profusely, acted like it was an honest mistake, and corrected the table name, but there was something else wrong in the results it gave me.

So I pointed out the incorrect nature of its corrected queries, and it again apologized profusely and acted like it was an honest mistake again.

It obviously doesn't know how to write SQL queries for GoldMine, though it presented several, and some would have worked.
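
For reference, the sort of query I was fishing for looks roughly like this. A hedged sketch, not gospel: in every GoldMine schema I have worked with, the primary contact records live in the CONTACT1 table (CONTACT2 holds the overflow fields, joined on ACCOUNTNO), but the column names here are from memory, and the pyodbc connection details are assumptions:

    # Hedged sketch: pull primary contact records from a GoldMine
    # SQL Server back end. CONTACT1 and the column names (ACCOUNTNO,
    # CONTACT, COMPANY, PHONE1) are from memory and may differ by
    # GoldMine version -- exactly the detail Grok got wrong.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=GoldMine;Trusted_Connection=yes;"
    )

    # Primary contact records are one row per account in CONTACT1.
    sql = """
        SELECT c1.ACCOUNTNO, c1.CONTACT, c1.COMPANY, c1.PHONE1
        FROM CONTACT1 AS c1
        ORDER BY c1.COMPANY, c1.CONTACT
    """

    for row in conn.cursor().execute(sql):
        print(row.ACCOUNTNO, row.CONTACT, row.COMPANY, row.PHONE1)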

But here's my complaint. Computers are as stupid as a bag of hammers. STUPID! AI is stupid. STUPID! They are inanimate Rube Goldberg machines, and they will make more mistakes in one second than you'll make in your lifetime, all the while insufferably pretending to be smart. When people program computers to personify themselves, I can barely stand to look at it. It's like the world is locking itself into a permanent voicemail voice-jail.

So I said to Grok, "The more you pretend to be human the more I dislike you", to which it again groveled and apologized, and told me it would stick to answering questions in a strictly technical manner.

The whole experience is insufferable. I might as well be typing with 1990s "Rude DOS" or some other 1000 Monkeys Typing program.

OK... I feel better. ;)
 
It "acted like" it knew what it was talking about, but the answer it gave used a table name that had never existed in GoldMine.

In my limited time with many of the AIs, this has been my experience too. As funny as it sounds, I think the technical term for this is "hallucination."

It amazes me how many people search for something and treat the answer as fact!
 
In my limited time with many of the AIs, this has been my experience too. As funny as it sounds, I think the technical term for this is "hallucination."

It amazes me how many people search for something and treat the answer as fact!

If It's on the internet it must be true.png
 
I just went to YouTube for the first time in a few weeks to see what it might recommend, and I noticed this and just started watching it:

3h:15m YouTube Video by Stephen Wolfram entitled "What is ChatGPT doing...and why does it work?"

One thing I want AI to do is to be able to use those writing tablets (sort of like the artist/graphics drawing tablets), only for math/science stuff, where the AI could give hints about something and push our math skills forward. I stopped doing math 100% after getting out of college, but I felt bad in my youth for having so little math and science knowledge. Makes me think I should get more informed. Anyway, I am going to try and watch this. Also, if you did not already know, logging into YouTube makes it easier to watch these long videos, because you can stop and YouTube remembers where you left off, under History on the left. I use that often with these kinds of videos.
 
floatingbones said: I can second Leo's recommendation of Stephen Wolfram's short book "What is ChatGPT Doing... and Why Does it Work?" (2023). …

One of my students had an interest in AI, so we took a "side quest". He took a summer course that went over basic machine learning theory and showed him how to use TensorFlow functions in a Jupyter Notebook-like environment. One amazing resource I found for a better visual understanding of ML, and later LLMs, was a series by the YouTube channel 3Blue1Brown. Grant creates math visualizations on his channel and dives into the math of weights in ML models. His series is here: https://www.3blue1brown.com/topics/neural-networks.

Our side quest started with creating an ML model that would solve XOR logic, a very, very small dataset. Then we went on to create a model that would recognize handwritten digits. It was a great journey that let me better understand the limitations of AI. Even with that understanding, the conversational tone adds to the illusion that it is more.
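
For anyone curious, a minimal sketch of the XOR exercise, assuming TensorFlow/Keras (my reconstruction, not the exact course notebook):

    # XOR is the classic "smallest non-trivial" ML problem: four samples,
    # not linearly separable, so a single linear layer cannot learn it
    # and at least one hidden layer is required.
    import numpy as np
    import tensorflow as tf

    # The entire training set: four input pairs and their XOR outputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
    y = np.array([[0], [1], [1], [0]], dtype=np.float32)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(8, activation="tanh"),     # hidden layer
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability out
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.05),
                  loss="binary_crossentropy")
    model.fit(X, y, epochs=500, verbose=0)

    print(model.predict(X, verbose=0).round())  # expect [[0], [1], [1], [0]]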

A side effect of using ChatGPT is that it makes me a better rubber-ducker and communicator, because I need to articulate the details of a problem to get decent results.
 
So, I did write a Windows driver once quite a while ago, but I am currently using ChatGPT to help me write a filter driver to log every process that makes a DNS query, and wow, does ChatGPT make it easy. It does make mistakes, but it is way easier to ask it questions than to use a search engine, go to a page, and hope it has what I want. I really like how it is all in one place. This code was written in like 2 minutes:

1735528777841.png
 
So, I did write a Windows driver once quite a while ago, but I am currently using ChatGPT to help me write a filter driver to log every process that makes a DNS query, and wow, does ChatGPT make it easy. It does make mistakes, but it is way easier to ask it questions than to use a search engine, go to a page, and hope it has what I want. I really like how it is all in one place. This code was written in like 2 minutes:

View attachment 1466
I have mixed feelings about this convenience, and I think it has a cost. Maybe it's my ADD, but traditional internet search allows me to broaden my understanding as I go on what I call "learning tangents", when I discover a new library or a clever use of an existing one. It won't be directly what I am looking for, but I will file it away as potentially useful for the future. With AI, it's a more focused solution, not 100 percent accurate, but if you know the basics you can tweak what is wrong most of the time. You can do so much more now with just high school CS courses, since AI can write your code and you just tweak it, so I feel high school CS has more value for now.

The other concern is that if people are getting their information from chat responses, then websites producing ad-supported content do not get money, so they might not provide content in the future, or might hide it behind an account or paywall. I am being a bit overdramatic, but if people are not rewarded well enough to make new content, then we are left with a static amount of knowledge, leading to a "Dim Ages" where new knowledge will be harder to come by.

I recently got a limited, high-level overview of how attribution works through cookies when reading about how the Honey service operates by potentially stealing attribution. Perhaps if a ChatGPT service has a browser extension, it could pass attribution to its sources for its active web searches.
 
I agree in the sense that a person can make a fully working program with very little understanding of the code. The last time I really programmed in C++ was prior to 2016, when I was a student. After that I never really worked in C++. For some reason most of the jobs in Maine, in places like Portland, Augusta, and more rural areas, want managed languages, most commonly C#, which, unfortunately, is where I have always been employed.

The main reason I like these AI tools is not because I could not write the code without them; it is because everywhere I have worked (the State, an insurance company, and a public company that produces public accounting software) has had very poor development practices, and some have had poor project management. At every single job I have had, the stress comes from poor-quality code and from other developers who have some kind of misguided belief that they are better developers than they really are, and who have very difficult personalities.

With that, I would rather work with an AI that has no personality and is nothing but a new technology. Also, you can give these AIs bad code and they will tell you that the code is bad. That is not to say that there were previously no automated systems that would tell a developer when they were given bad code, but in my experience, managers are often in denial about bad code, or they simply say upper management is not going to change until it affects profits.

I have some respect for management communicating that nothing will change until it affects profits, because at least that sounds honest, but I am not willing to cooperate with any business that is in denial about how bad its code is. That puts unnecessary stress on the engineer.

AI is just one more tool: sometimes I write code in a plain text editor and sometimes I use an IDE. If you are a well-adjusted developer, without something wrong with you, then all you really need is an honest desire to do the work (or your job).

I have had more than one job where I would literally hear concerns that other developers did not know about simple things like language features. I am not sure where others got their degrees, but I had to read Shakespeare, solve differential equations, and learn Newtonian physics (strangely, I did a lot of other students' homework; I don't understand why it was so hard for them). So I get kind of angry when I hear about people who have been doing the same kind of job for more than ten years and there is doubt about their being able to figure out a language feature. I mean, do they also need someone to dress them in the morning?

The best thing about AI is that you can tell it when it is wrong, and it often does not argue. My most memorable experience as a computer programmer was raising concerns with someone who had been employed much longer than me, and he said, "Do you think you are smarter than me?". I almost started laughing, but he seemed pretty mad. So I guess you could try arguing with AI, argue something that you know is false, and see what happens.

Chad
 
Amen to calling out the hubris, the defensiveness about code, and the inability of some people to solve problems. My mom manages an archaic database for a company; she should have retired years ago. Anyone they send to replace her is not up to the task, so she got to negotiate her own work conditions, like working from home.

It is understandable, but also kind of sad and a bit ironic, that the solution to uncollaborative programmer personalities is to have programmers stop working with people. But after reading that last sentence, it does make sense.

One thing about ChatGPT is that it data-mined lots of stuff for its model, the Stack Exchange forums for one, but also social-emotional sources that might help make incremental changes to a hard-to-work-with environment.


Then the question is "Is it worth the effort?"
 
can make a fully working program with very little understanding of the code.
As a user of such a program, written that way, how do you feel? It's highly likely that such a program is riddled with dangerous code that a newbie didn't have the wherewithal to know to avoid: things like storing a password in clear text, for example. I consider it akin to this thought experiment: how would you feel driving a car that was repaired by an inexperienced mechanic who was getting his brake-repair advice from ChatGPT?
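
To make the clear-text password example concrete, here is a minimal sketch of the difference, using only Python's standard library (the function names are my own illustration):

    # Unsafe: what a newbie (or an unchecked AI) might produce.
    # Anyone who can read the database now has every password.
    import hashlib
    import hmac
    import os

    def store_password_unsafe(password: str) -> str:
        return password  # clear text -- do not do this

    # Safer: a salted, deliberately slow key-derivation hash (PBKDF2 from
    # the standard library; bcrypt/argon2 are common third-party choices).
    def store_password(password: str) -> bytes:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt + digest  # keep the salt with the hash

    def verify_password(password: str, stored: bytes) -> bool:
        salt, digest = stored[:16], stored[16:]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)  # constant-time compare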
 
Yes, you are 100% correct. I think it is just good for generating ideas and then showing a rough idea of what the code could be. But like any new code, you should go over it very carefully, and you have to be able to make a judgment call as to whether you, as a human, understand the code. It works as a teaching tool also. I like asking it to write code in very small chunks and then verifying that each one makes sense.

For example, I started a kernel-mode driver for watching all the processes on a system and printing out when a process makes a query; it not only gives the code, but attempts to explain at a high level what is required to make it work and why it might work, saying things like "you can make a filter driver for Windows 7 and higher." Then you can ask it what a filter driver is, and whether there is any different way to code the same thing. You could also ask it about the Plug and Play manager in kernel mode: as a kernel-level developer you have a kernel-mode Plug and Play manager, so why is that useful? You could ask what a programmer would have to deal with without a kernel-mode Plug and Play manager.

The reason I like it is likely the reason that a company like GRC might not: it allows me to be much lazier than I otherwise would be. However, ChatGPT is not just about writing code, it is about learning, and everything you get out of it should be verified and understood 100%. Even before I ever heard of ChatGPT a few years ago, my approach to learning was to decompose things into very simple chunks until they are idiot-proof. That is somewhat the reason I like assembly language: not so much because it is fun to code in, but because if you know even the simple things about a generic assembly language, you have a much better idea of how digital computers work. You can even take that knowledge and go backward, looking at the development of computers all the way back to vacuum tubes, and try to understand where assembly language came in and why.

But yes, I agree.
 
Also, I was just reminded of a podcast in which Neil deGrasse Tyson had a guest speaker, and they talked about what general and special relativity are and the math skills required to understand each. Before I watched it, I did not know that, according to him, we would have figured out relativity even without Einstein because of the timing problems of GPS satellites. It was so weird that I don't even remember it well. Should this be fact checked?
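
For what it's worth, the GPS claim checks out against the standard textbook numbers (my arithmetic, not the podcast's): special relativity slows the satellite clocks because of their orbital speed, general relativity speeds them up because gravity is weaker at altitude, and the uncorrected net drift would be impossible to miss:

    \[
    \frac{\Delta t_{\mathrm{SR}}}{t} \approx -\frac{v^2}{2c^2}
      \approx -7~\mu\mathrm{s/day}
      \qquad (v \approx 3.9~\mathrm{km/s})
    \]
    \[
    \frac{\Delta t_{\mathrm{GR}}}{t} \approx \frac{GM_E}{c^2}
      \left(\frac{1}{R_E} - \frac{1}{r}\right)
      \approx +45~\mu\mathrm{s/day}
      \qquad (r \approx 26{,}600~\mathrm{km})
    \]
    \[
    \text{net} \approx +38~\mu\mathrm{s/day},
    \qquad
    c \times 38~\mu\mathrm{s} \approx 11~\mathrm{km}
    \]

So uncorrected satellite clocks would accumulate roughly eleven kilometers of ranging error per day, which supports the guest's point that the discrepancy would have forced the discovery of the corrections, Einstein or not.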