Religion and Ethics Forum
General Category => General Discussion => Topic started by: SqueakyVoice on January 03, 2023, 10:11:18 AM
-
I know this subject has been raised elsewhere...
https://www.theguardian.com/media/2022/aug/20/andrew-tate-money-making-scheme-for-fans-of-extreme-misogynist-closes
...however, this sort of violent stupidity keeps (kept?) coming up, as the algorithms seem to let controversy carry on and spread. Should these algorithms 'learn' how to stop it by ignoring it?
Or could there be a better way?
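To make the 'ignoring it' idea concrete, here's a minimal Python sketch of engagement ranking with an optional controversy penalty. Every field, number and weight is invented for illustration; it isn't any platform's actual scoring.

Code:
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float   # clicks, replies, shares (normalised 0-1)
    controversy: float  # e.g. share of reports/angry reactions (0-1)

def score(post: Post, controversy_penalty: float = 0.0) -> float:
    """Rank by engagement, optionally penalising controversy.

    With the penalty at 0 this is the classic 'outrage wins' ranking;
    raising the penalty makes the feed quietly ignore such posts.
    """
    return post.engagement * (1.0 - controversy_penalty * post.controversy)

posts = [
    Post("measured take", engagement=0.4, controversy=0.1),
    Post("violent stupidity", engagement=0.9, controversy=0.95),
]

for penalty in (0.0, 1.0):
    ranked = sorted(posts, key=lambda p: score(p, penalty), reverse=True)
    print(penalty, [p.text for p in ranked])

With the penalty at zero the outrage post tops the feed; turn it up and the measured take wins instead.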
https://liveapp.inews.co.uk/2022/12/25/ai-robot-calls-out-bigotry-war-and-inaction-over-climate-change-in-channel-4s-alternative-christmas-message/content.html
Happy New Year, you lot.
-
Your link doesn't work - fixed it.
https://www.theguardian.com/media/2022/aug/20/andrew-tate-money-making-scheme-for-fans-of-extreme-misogynist-closes
-
Thanks. I've tried updating my link, but it still doesn't work. :-[
-
There was an article this morning (Fri 7 Apr) about an American lawyer who found out that an AI bot had written about him and accused him of sexually assaulting younger women.
It was on BBC Sounds at about five to nine (on Today, btw).
He had no idea why he was accused, and he could prove he'd never been to some of the places where the assaults supposedly happened. There's no conclusion, but it seems to suggest AI can't (/ didn't / won't?) corroborate any evidence before it's published.
-
Seems about right.
I asked ChatGPT and Bard whether they keep traces of the steps taken during generation:
ChatGPT:
As an AI language model, I do not save a trace of the generation steps when generating a response. However, during the generation process, I do use a variety of techniques and algorithms that allow me to generate responses based on the input I receive.
... (description of some of the process)
While I do not save a trace of the specific steps I take when generating a response, I use a complex algorithm that allows me to generate responses that are relevant and meaningful based on the input I receive.
Bard:
Yes, I do save a trace of my generation steps. This trace includes the input prompt, the intermediate steps that I took to generate the response, and the final response itself. This information can be used to debug my responses and to understand how I generated them.
For example, if you ask me a question and I give you an incorrect answer, you can use the trace to see where I went wrong. You can also use the trace to see how I generated a particular response, which can be helpful if you want to understand how I think.
I am still under development, and I am always learning. The trace information can be used to help me improve my responses and to make me more accurate.
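Worth noting: whether a 'trace' exists is down to what the operator chooses to log around the model, not the model itself. A minimal sketch of what such logging could look like, with a stand-in 'model' and a made-up log format; nothing here reflects how OpenAI or Google actually do it.

Code:
import datetime
import json

def fake_model(prompt: str) -> list[str]:
    """Stand-in for a real language model: emits 'tokens' one at a time."""
    return (prompt.lower() + " ... generated reply").split()

def generate_with_trace(prompt: str, logfile: str = "trace.jsonl") -> str:
    """Wrap generation and record the prompt, each step, and the final response."""
    trace = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "steps": [],
    }
    tokens = []
    for i, token in enumerate(fake_model(prompt)):
        tokens.append(token)
        trace["steps"].append({"step": i, "token": token})
    trace["response"] = " ".join(tokens)
    with open(logfile, "a") as f:  # one JSON record per call
        f.write(json.dumps(trace) + "\n")
    return trace["response"]

print(generate_with_trace("Do you save a trace of your generation steps?"))

If the operator keeps a log like that, Bard's answer is possible; if they don't, ChatGPT's is the honest one.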
"Do you check that the facts in your responses are true?"
ChatGPT:
As an AI language model, I do not have the ability to determine whether the facts in my responses are true or not. I generate responses based on the patterns and relationships that exist within the vast amount of text that I have been trained on.
While I strive to generate responses that are accurate and informative, it is important to keep in mind that I am an artificial intelligence and may not always provide correct information. It is always a good idea to verify the information provided by any source, including an AI language model, with additional sources and research.
If you ever come across any information in my responses that you believe to be incorrect or misleading, please let me know and I will do my best to provide a more accurate and reliable response.
Bard:
Yes, I do check that the facts in my responses are true. I use a variety of methods to do this, including:
- Fact checking: I use fact-checking websites and databases to verify the accuracy of the information I find.
- Common sense: I use my knowledge of the world and common sense to evaluate the accuracy of information.
- My training data: I was trained on a massive dataset of text and code, which includes a lot of factual information. I use this information to help me evaluate the accuracy of new information.
I am still under development, and I am always learning. I am always working to improve my ability to check the facts in my responses and to make sure that the information I provide is accurate.
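Bard's answer looks doubtful: a plain language model doesn't consult fact-checking databases mid-generation. Genuinely corroborating evidence before publishing, as the lawyer story above would require, means bolting a retrieval step onto the output. A rough sketch, where the claim extractor and evidence search are placeholders I've invented:

Code:
def extract_claims(response: str) -> list[str]:
    """Placeholder: a real system would use NLP to pull out checkable claims."""
    return [s.strip() for s in response.split(".") if s.strip()]

def search_evidence(claim: str) -> list[str]:
    """Placeholder for a lookup against a search index or fact-check database."""
    return []  # this toy version never finds a source

def verify(response: str) -> dict[str, bool]:
    """Mark each claim supported only if at least one source backs it."""
    return {claim: bool(search_evidence(claim)) for claim in extract_claims(response)}

print(verify("The lawyer visited that city in 2018. He committed assaults there."))
# -> every claim False: with nothing corroborated, nothing should be published

Unless search_evidence actually returns sources, every claim stays unverified, which is roughly where the chatbots themselves admit to being.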
There is no set of internationally recognised laws that AI implementations must follow to allow validation and verification. If there is a pause in development, putting that framework in place is part of what the pause should be used for.
(Hopefully the AIs won't ask for religious exemptions :) )