> agreed, if one is a politician, that is a common occurrence

It's a political "vote grab" rather than anything to do with common sense. Do they not realize how many American computers have League of Legends on them (made by Riot Games, which is owned by the Chinese company Tencent, not ByteDance)? Maybe they're stealing secrets too... etc...??
> So, guess what it kept doing? That's right, it kept showing me possible friends FROM MY CONTACTS that I might like to follow. So much for saying no. Facebook just ignored my preferences.

It is possible that FB got your details from one of your contacts' address books, rather than the other way round. On the other hand, you only need to leave that access open for a few seconds and they will slurp it up.
> And both have only a single post, mere minutes after being registered.

Creating a post minutes after registering is not unusual; it would be expected. But you've got a point about the message content. Makes you wonder.

I haven't played with ChatGPT. I understand it has a strong political bias, probably because of who programmed it. Just because a bot has access to gigabytes of knowledge doesn't make it any more reliable than a human. Just because it mimics human thought doesn't mean it can think. Just because it looks like God's creation in a very limited way doesn't mean it's alive. And just because it can make fast decisions doesn't mean they're GOOD decisions or APPROPRIATE decisions.

When bots start influencing humans, or making automated decisions that affect, or destroy, humans' lives, we have to ask whether the bot has an agenda because of the way it was designed. I think we should make it illegal for any bot anywhere to make any decision that meaningfully or materially affects a person's life without human intervention.
> Just because it mimics human thought doesn't mean it can think. Just because it looks like God's creation in a very limited way doesn't mean it's alive. And just because it can make fast decisions doesn't mean they're GOOD decisions or APPROPRIATE decisions.

I found Stephen Wolfram's recent Q&A about ChatGPT informative. Particularly from around the midpoint on, he does a good job of explaining how such an AI can produce such seemingly human-like text.
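The core idea Wolfram walks through is that a language model generates text one token at a time, repeatedly sampling a likely next word given what came before. A toy sketch of that loop (this is purely illustrative — a tiny hand-made bigram table standing in for the billions of learned parameters; none of these names come from any real model's code):

```python
import random

# Hypothetical bigram table: probability of each next word given the current word.
# In a real LLM these probabilities come from a trained neural network and are
# conditioned on the whole preceding context, not just one word.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_words=5, seed=0):
    """Generate text by repeatedly sampling a next word from the table."""
    random.seed(seed)  # fixed seed so the toy output is repeatable
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:  # no known continuation: stop
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
```

The point of the sketch: nothing in the loop "understands" the sentence; it is probability lookup and sampling all the way down, which is why fluent-sounding output is not evidence of thought.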
> it will be up to us to try and peek behind the curtain to see just how each individual AI makes its decisions

Yes, but the AI owners don't want you peeking. If I may pick on the financial industry: I was recently trying to purchase some gift cards for Christmas. Granted, the company wasn't in my home state, and I was doing several transactions that weren't normal for me. But I had the debit card number, the security code, the zip code, etc., all legitimate. Visa's big computer in the sky just DECIDED to shut me down. No phone call. No humans. No interaction. Just "Sorry, your transaction failed. Try later."

In another case, I was trying to send a critical, time-sensitive payment to someone by Zelle. It just failed. Again, no phone call. Nothing. Not even a descriptive error message. I had to go to another account at another bank to complete it. That failed too, but at least it gave me a phone number to call. I verified my identity, confirmed that I wanted to proceed, and they sent it through.

In deference to the OP of the thread, the same concepts apply to apps like TikTok. The algorithm determines whether you can post something, how to rank it, maybe whether to ban you, and what you can see. And they NEVER show you what's behind the curtain. We're going into strange times.