
Posted

So currently, those who wish to be in an echo chamber will sift to the fifth page of their Google search results before they find 'their truth' or a result that fits their narrative. There is opinion, there is individual perspective, and then there is fact. If we are going to live in this post-truth age, how does AI reach its full potential? Without knowing (me) really anything on the subject, if I had to guess I would imagine (much like Google) that one version of AI will eventually lead the market and render all others irrelevant. 

If this 'all-seeing eye' tool is the instantaneous arbiter and argument-solver of all things, how are those who wish to live under the yoke of confected lies going to make sense of the world? 

There is a huge anti-specialist, anti-history movement. Will the plan from the owners of AI be to 'stack the deck' when it comes to truth, or political leaning? I can't see it working any other way.

How will AI be able to make sense of the lies it will have to tell, if it has read all the books, journals, scientific papers etc. in existence? Will the mechanism be to rely on people being so lazy that nobody will bother fact-checking the fact-checker?

I cannot see AI being able to operate in the mess of our current tribalism. One side must win... which?

Posted

In my country, we are told mathematics is racist, so there's plenty of room for more than one "truth".  

I can see multiple AI platforms that cater to all political, social, and religious leanings. Go with the one that meshes best.

 

  • Like 2
Posted
5 hours ago, BrightonCorgi said:

Go with the one that meshes best.

Do you mean 'meshes best' with the user's tastes? I.e., a continuation of the model we currently have, people finding the 'fact' they like and going with that?

Doesn't that undermine the whole idea of the usefulness of AI?  

Posted
5 hours ago, 99call said:

Do you mean 'meshes best' with the user's tastes? I.e., a continuation of the model we currently have, people finding the 'fact' they like and going with that?

Doesn't that undermine the whole idea of the usefulness of AI?  

Yes, one that meshes with the user's tastes. One that shares the same basic premises about the world and humanity.

  • Like 1
Posted
5 hours ago, BrightonCorgi said:

Yes, one that meshes with the user's tastes. One that shares the same basic premises about the world and humanity.

I appreciate and totally respect your opinion on this. To dig a little deeper:

With AI being used in the medical industry, how should the AI engine process the source material of certain people (who believe in reincarnation) holding that those with disabilities are being punished for sins in a previous life? In its digestion of all the materials at its disposal, does an AI engine throw this out as nonsense while conducting its work, does it set it aside as a non-relevant but to-be-respected belief of some, or should it consider it valid and equal to hard scientific data when trying to assist in its work on, for example, the treatment of spinal injuries? 

Posted
5 hours ago, 99call said:

I appreciate and totally respect your opinion on this. To dig a little deeper:

With AI being used in the medical industry, how should the AI engine process the source material of certain people (who believe in reincarnation) holding that those with disabilities are being punished for sins in a previous life? In its digestion of all the materials at its disposal, does an AI engine throw this out as nonsense while conducting its work, does it set it aside as a non-relevant but to-be-respected belief of some, or should it consider it valid and equal to hard scientific data when trying to assist in its work on, for example, the treatment of spinal injuries? 

That is a faith-based analysis. You have to remember that AI isn't going to give you "the meaning of life" or tomorrow's lottery numbers. An AI prompt could ask for a faith-based response.

  • Like 1
Posted
5 hours ago, BrightonCorgi said:

That is a faith-based analysis. You have to remember that AI isn't going to give you "the meaning of life" or tomorrow's lottery numbers. An AI prompt could ask for a faith-based response.

Interesting. So in essence you're suggesting people would simply set their filters. I can see that. Sadly, I can only see it deepening tribalism, but I do appreciate what you are saying. 

I am not religious, but I have no issue with people who are. My standpoint is usually that I have no issue with religion as long as nobody actively tries to force their beliefs on me, nor should I be expected to endure any collateral damage from those beliefs. That obviously works the other way around too: nobody should be forced from their religious beliefs or suffer collateral damage from non-religious beliefs. In that respect, a faith-based response filter is completely understandable. 

I had hoped that Tim Berners-Lee's vision of the internet as a tool of collaboration and some degree of unity would actually be realised in AI, but it looks like it's just going to be another, worse version of the same mess. 

With regards to "the meaning of life": I don't require the meaning of life, just the reinstatement of basic facts. I was hoping for that as a bare minimum.

Posted
9 minutes ago, JohnnyO said:

I once asked Alexa if she was a communist and she said she had no affiliations to a political party. John

Is there any subtext to this comment, or is it just a simple statement?

Posted
5 hours ago, 99call said:

Do you mean 'meshes best' with the user's tastes? I.e., a continuation of the model we currently have, people finding the 'fact' they like and going with that?

Doesn't that undermine the whole idea of the usefulness of AI?  

AI doesn’t “do” anything - it’s a set of techniques to process input data in order to generate a set of novel outputs. Many people are assigning intent or bias to AI models when what they are complaining about are the input filters that companies place in front of the model to prevent output the company does not want. Any time you see a message like “I’m sorry, I can’t answer that” or “I have no political affiliation” you are seeing the result of an input filter and not the result of a model. 

(I studied this stuff in college a long time ago and used to work for one of the main companies in the space.) 
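For what it's worth, the "filter in front of the model" idea can be sketched in a few lines. This is a toy illustration only: the topic list, canned replies, and simple keyword matching are invented for the example and are not any real vendor's implementation.

```python
# Hypothetical sketch of a pre-model "input filter". The canned refusal the
# user sees is produced here, before the prompt ever reaches the model.
# Topic keywords and replies below are invented for illustration.

BLOCKED_TOPICS = {
    "political affiliation": "I have no political affiliations.",
    "lottery numbers": "I'm sorry, I can't answer that.",
}

def run_pipeline(prompt: str, model=None) -> str:
    """Return a canned reply if the prompt trips a filter;
    otherwise pass the prompt through to the underlying model."""
    lowered = prompt.lower()
    for topic, canned_reply in BLOCKED_TOPICS.items():
        if topic in lowered:
            return canned_reply  # the model is never consulted
    # Only now would the model itself be invoked.
    return model(prompt) if model else "(model output)"
```

The point of the sketch is that the canned reply is returned before the model is ever called, which is why such answers tell you nothing about the model itself.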

  • Like 1
Posted
3 hours ago, Shakey said:

AI doesn’t “do” anything - it’s a set of techniques to process input data in order to generate a set of novel outputs. Many people are assigning intent or bias to AI models when what they are complaining about are the input filters that companies place in front of the model to prevent output the company does not want. Any time you see a message like “I’m sorry, I can’t answer that” or “I have no political affiliation” you are seeing the result of an input filter and not the result of a model. 

(I studied this stuff in college a long time ago and used to work for one of the main companies in the space.) 

I'm not really talking about the AI of today, rather versions of the future.

It seems the AI conversations of a few years ago were largely about the management of AI in a sandboxed state. That very much feels like where we are now. I'm talking about a hypothetical version in the future that has absorbed all the reference material available to it. 

I think we have already learned within these few responses that if we had access to a tool that could tell us any refined, collated fact, some would just apply a setting whereby it would only tell them what they wanted to hear.    

What I'm getting at, in a way, is this: when you refer to "input data", in the post-truth age AI could be a path back to some semblance of normality and common sense, or we can just continue to churn out more bullshit and muddy the pond. 

Posted
2 hours ago, 99call said:

I'm not really talking about the AI of today, rather versions of the future.

It seems the AI conversations of a few years ago were largely about the management of AI in a sandboxed state. That very much feels like where we are now. I'm talking about a hypothetical version in the future that has absorbed all the reference material available to it. 

That's the key point: you're assuming the source data isn't biased. Does the public AI engine prioritize sites that pay for and advertise with vendors like Google Bard or Microsoft Copilot?

 

Posted
2 hours ago, 99call said:

I'm not really talking about the AI of today, rather versions of the future.

It seems the AI conversations of a few years ago were largely about the management of AI in a sandboxed state. That very much feels like where we are now. I'm talking about a hypothetical version in the future that has absorbed all the reference material available to it. 

I think we have already learned within these few responses that if we had access to a tool that could tell us any refined, collated fact, some would just apply a setting whereby it would only tell them what they wanted to hear.    

What I'm getting at, in a way, is this: when you refer to "input data", in the post-truth age AI could be a path back to some semblance of normality and common sense, or we can just continue to churn out more bullshit and muddy the pond. 

I understand this position about future AI models; I'm saying that position fundamentally misunderstands how these models work today and how they will work in the future. There are some AGI and "AI safety" hucksters who are simultaneously very influential outside the field and completely ignored inside it. These people have greatly confused the public's understanding of the technology. 

For example: training data. Right now there are huge legal battles in the US and EU over the rules for training data. These battles exist entirely in courts and legislatures; the AI models don't have a vote and won't have a choice in what rules are made and how they are implemented. (Again, the models don't "do" anything; they have no intent and no ability to go get their own training data.) We will eventually have updated copyright laws and royalty payment structures that dictate how training data is used, and companies will follow those rules or go out of business. 

To put it another way, all of these models are trained and operated by companies/people that pay $$$ for the data center time to train them and run them. Those people determine what is made and how it runs. It's extremely unlikely this will change. 
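That "follow the rules or go out of business" step can be pictured as a filter applied by the operator while assembling the training corpus. A toy sketch, with invented licence labels (real licensing regimes are far more complicated than an allow-list):

```python
# Hypothetical sketch: training corpora are assembled by the companies and
# people who run the models, and legal rules decide which documents may be
# used. The licence names and allow-list below are illustrative assumptions.

ALLOWED_LICENCES = {"public-domain", "cc-by", "licensed-with-royalty"}

def build_training_corpus(documents):
    """Keep only documents whose licence the operator may legally train on.
    The model has no say in this step; it only ever sees the survivors."""
    return [doc for doc in documents if doc.get("licence") in ALLOWED_LICENCES]

corpus = build_training_corpus([
    {"text": "old novel", "licence": "public-domain"},
    {"text": "news article", "licence": "all-rights-reserved"},
])
# Only the public-domain document survives the filter.
```

The rules themselves are set outside the model, in courts and legislatures; the model simply never sees what the operator excludes.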

Posted
5 hours ago, 99call said:

So currently, those who wish to be in an echo chamber will sift to the fifth page of their Google search results before they find 'their truth' or a result that fits their narrative. There is opinion, there is individual perspective, and then there is fact. If we are going to live in this post-truth age, how does AI reach its full potential? Without knowing (me) really anything on the subject, if I had to guess I would imagine (much like Google) that one version of AI will eventually lead the market and render all others irrelevant. 

If this 'all-seeing eye' tool is the instantaneous arbiter and argument-solver of all things, how are those who wish to live under the yoke of confected lies going to make sense of the world? 

There is a huge anti-specialist, anti-history movement. Will the plan from the owners of AI be to 'stack the deck' when it comes to truth, or political leaning? I can't see it working any other way.

How will AI be able to make sense of the lies it will have to tell, if it has read all the books, journals, scientific papers etc. in existence? Will the mechanism be to rely on people being so lazy that nobody will bother fact-checking the fact-checker?

I cannot see AI being able to operate in the mess of our current tribalism. One side must win... which?

Such has it always been. "The Church"  is a cracking example. We have now replaced the pope with Sam Altman. 

AI and its high priests have promised to solve everything from climate change to cancer. Hallelujah!

Governments and their faux-outraged minions around the world look to roll out "misinformation" laws to protect the sheep (the flock) from those who may push lines of thought different from those of the government of the day... who are writing the law. 

Now THAT is truly frightening! :lol3:

  • Like 1
Posted
1 hour ago, BrightonCorgi said:

That's the key point: you're assuming the source data isn't biased. Does the public AI engine prioritize sites that pay for and advertise with vendors like Google Bard or Microsoft Copilot?

 

 

1 hour ago, Shakey said:

I understand this position about future AI models; I'm saying that position fundamentally misunderstands how these models work today and how they will work in the future. There are some AGI and "AI safety" hucksters who are simultaneously very influential outside the field and completely ignored inside it. These people have greatly confused the public's understanding of the technology. 

For example: training data. Right now there are huge legal battles in the US and EU over the rules for training data. These battles exist entirely in courts and legislatures; the AI models don't have a vote and won't have a choice in what rules are made and how they are implemented. (Again, the models don't "do" anything; they have no intent and no ability to go get their own training data.) We will eventually have updated copyright laws and royalty payment structures that dictate how training data is used, and companies will follow those rules or go out of business. 

To put it another way, all of these models are trained and operated by companies/people that pay $$$ for the data center time to train them and run them. Those people determine what is made and how it runs. It's extremely unlikely this will change. 

I guess both of these comments fully answer the question. AI will not be a tool to refine and promote truth and fresh understanding, just more of Pandora's box, as Stephen Fry would put it. 

  • Like 1
Posted

 

Why is it that so many revert to "actors" to answer the important questions of life? :lol3:

Let me google up what thought bubble George Clooney has had on the subject. 

  • JohnS changed the title to Can AI exist on a diet of lies? Or does its future require the end of tribalism?
Posted
3 hours ago, El Presidente said:

Why is it that so many revert to "actors" to answer the important questions of life? :lol3:

Let me google up what thought bubble George Clooney has had on the subject. 

I would say, if you think Stephen Fry is primarily an actor, you might not know too much about Stephen Fry. You could call him a comedian before you could call him an actor.

Posted
3 hours ago, 99call said:

 You could call him a comedian, before you could call him an actor.

.....consider me reassured 😂

  • Haha 1
Posted
3 hours ago, El Presidente said:

.....consider me reassured 😂

What he's known for most in the UK is as a novelist, lecturer, intellectual, and general top bloke. I would say as a national treasure he's sitting just behind Sir David Attenborough. 

But if you want to dismiss him out of hand because he's involved in the arts, I understand.

Posted
3 hours ago, El Presidente said:

Why is it that so many  revert to "actors" to answer the important questions of life. :lol3:

Let me google up what thought bubble George Clooney has had on the subject. 

Something, something, something drink casamigos my friend. 

Posted
3 hours ago, 99call said:

What he's known for most in the UK is as a novelist, lecturer, intellectual, and general top bloke. I would say as a national treasure he's sitting just behind Sir David Attenborough. 

But if you want to dismiss him out of hand because he's involved in the arts, I understand.

They say of the Acropolis where the Parthenon is...

Posted

If AI only delivers information, solutions, suggestions, or actions based on data, evidence, and reason, without morality, it could become SkyNet. I think the morality piece is where it gets complicated and therefore potentially biased. How do we build a basic morality and ethics into AI that is suitable for all? I wish I were smart enough to even contemplate a solution to that problem. I can consider the pitfalls, and they are pretty terrifying. 

Posted

How could humanity create anything but in its own image, and from its own perspective? I would imagine this tool, like any other we've created, will serve us as we want it to, regardless of the goal we have for it. 

  • Like 1
Posted
3 hours ago, 99call said:

 I would say as a national treasure he's sitting just behind Sir David Attenborough. 

Of course... Attenborough... someone else I want to learn about AI from.

AI and Aardvarks 

:rolleyes:

Posted
14 minutes ago, free85 said:

How could humanity create anything but in its own image, and from its own perspective? I would imagine this tool, like any other we've created, will serve us as we want it to, regardless of the goal we have for it. 

The tools are fine. It’s how we decide to use them as individuals that can be called into question. 

  • Like 1
