ChatGPT detector




23 hours ago, Cairo said:

My issue with this type of application is that it will just regurgitate conventional wisdom and belief--and therefore will stifle human creativity, problem solving and even debate.

That may be ok for educating up to the high school level--where the basics need to be mastered.

However, many human mistakes are caused by failing to consider "out of the box" alternatives.

True, but read content generated by real people in journalism over the past 5 years. Not much different, sadly. 


1 hour ago, helix said:

Rise of the machines: has technology evolved beyond our control? A good opinion piece by James Bridle (not conspiracy crap).

 https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-

That was a decent article that recounted many interesting events, though with a clickbait title. Our tech won’t evolve beyond our control in the next two to three decades. However, AI has made, and will continue to make, unpredictable and seemingly non-human decisions; AlphaGo is a prime example of one of the first.

1 hour ago, helix said:

Tongue in cheek comment my friend. More accurately, didn't see it coming

No worries-



22 minutes ago, KnightsAnole said:

Can you expand on this thought?

Most journalism is rehashed summaries of the same AP press release. Might as well be written by robots. Buzzfeed and other clickbait sites don't really contribute anything to society either. The number of truly informative journalism outlets seems to shrink every year.


22 minutes ago, dominattorney said:

Most journalism is rehashed summaries of the same AP press release. Might as well be written by robots. Buzzfeed and other clickbait sites don't really contribute anything to society either. The number of truly informative journalism outlets seems to shrink every year.

That’s a perfect example of our willingness to be lazy. It’s something Kitchen remarked on earlier and, in the face of AI, it’s extremely concerning, especially as our kids come up through school and can potentially fake the whole thing. With AI comes a responsibility for our own education…

…not just echoes in an echo chamber 😒


42 minutes ago, KnightsAnole said:

That’s a perfect example of our willingness to be lazy. It’s something Kitchen remarked on earlier and, in the face of AI, it’s extremely concerning, especially as our kids come up through school and can potentially fake the whole thing. With AI comes a responsibility for our own education…

…not just echoes in an echo chamber 😒

Could not agree more, but the real problems could be even worse. Let robots teach the kids, and how long before they start to believe that the Holocaust never happened and that nuclear weapons are just the latest "psy op"?

Not as far-fetched as it seems at first glance. Preliminary experiments with AI left researchers appalled at how quickly the artificial brains became polluted with conspiratorial thinking and racist invective as they combed the internet without the ability to distinguish fact from fiction.


1 minute ago, dominattorney said:

Could not agree more, but the real problems could be even worse. Let robots teach the kids, and how long before they start to believe that the Holocaust never happened and that nuclear weapons are just the latest "psy op"?

Not as far-fetched as it seems at first glance. Preliminary experiments with AI left researchers appalled at how quickly the artificial brains became polluted with conspiratorial thinking and racist invective as they combed the internet without the ability to distinguish fact from fiction.

That’s because at the moment it’s learning from the internet and can’t distinguish truth from falsity or even opinion. It will be one of the hurdles AI gets over fairly quickly though, within the next couple of years. ChatGPT is just combing the internet up to 2021, paired with an impressive language model; it’s not learning from recent events, nor is it truly AI. With $10 billion in their pocket, OpenAI is going to be updating frequently. I wouldn’t be worried about it denying the Holocaust or things like that once it becomes fully functional. It will learn from everyone’s input, not one person or a group of people. That’s why it has been open source and will hopefully continue to be. Now, if a derelict player comes along who gains the power to control it exclusively, that’s a whole different ball game.


22 minutes ago, KnightsAnole said:

That’s because at the moment it’s learning from the internet and can’t distinguish truth from falsity or even opinion. It will be one of the hurdles AI gets over fairly quickly though, within the next couple of years. ChatGPT is just combing the internet up to 2021, paired with an impressive language model; it’s not learning from recent events, nor is it truly AI. With $10 billion in their pocket, OpenAI is going to be updating frequently. I wouldn’t be worried about it denying the Holocaust or things like that once it becomes fully functional. It will learn from everyone’s input, not one person or a group of people. That’s why it has been open source and will hopefully continue to be. Now, if a derelict player comes along who gains the power to control it exclusively, that’s a whole different ball game.

I didn't worry about humans denying the Holocaust either, but that changed about 7 years ago.


2 minutes ago, dominattorney said:

Enough humans though. 

I applaud your enduring optimism though, @KnightsAnole

👍 I’m optimistic about the human ability to persevere, though I believe it will be challenged. I’ve said a few times now that this tech is both exciting and terrifying, but it is inevitable; it could very well be ‘the great filter’ and the reason we don’t see life everywhere in our galaxy.


1 hour ago, KnightsAnole said:

That’s because at the moment it’s learning from the internet and can’t distinguish truth from falsity or even opinion.

I am laughing so hard. What if AI determines that "hate" or some "debunked conspiracy theory" on some subject, no matter how obscure, turns out to be "valid" based on detailed analysis done by the AI?

Imho this will, and must, happen at some point--just because there is so much that we all take for granted that may not be valid, merely shared views.

Then what will the gatekeepers do?

Are they going to send the AI to a re-education camp?


5 minutes ago, Cairo said:

I am laughing so hard. What if AI determines that "hate" or some "debunked conspiracy theory" on some subject, no matter how obscure, turns out to be "valid" based on detailed analysis done by the AI?

Imho this will, and must, happen at some point--just because there is so much that we all take for granted that may not be valid, merely shared views.

Then what will the gatekeepers do?

Are they going to send the AI to a re-education camp?

If you think I’m only optimistic about this tech, you haven’t been paying attention. There are no gatekeepers in open source. That’s why this AI was created this way and why it’s important for it to continue that way. It will make all kinds of mistakes, and at this point we have some ability to correct it. In 50 years or so, the AI may grow beyond our ability to even do that. The singularity is not science fiction; it is something we should anticipate.


37 minutes ago, KnightsAnole said:

There are no gatekeepers in open source

The real world will be the test on that.

Just watch AI saying something the elite humans hate--and watch what happens next.

It could be as simple as "AI is superior to humans.  The best way to save the planet is to remove all human beings."


7 minutes ago, Cairo said:

The real world will be the test on that.

The real world already is a test of it, and it will continue to be.

8 minutes ago, Cairo said:

Just watch AI saying something the elite humans hate--and watch what happens next.

You believe in a lot of conspiracy theories, don’t you?

9 minutes ago, Cairo said:

It could be as simple as "AI is superior to humans.  The best way to save the planet is to remove all human beings."

Now you’re getting the picture; it’s a bit cliché, but OK.


Just now, KnightsAnole said:

You believe in a lot of conspiracy theories, don’t you?

Guilty as charged.....many of them most folks have never even heard of--and never will unless AI validates them.

😀


16 minutes ago, Cairo said:

Guilty as charged.....many of them most folks have never even heard of--and never will unless AI validates them.

😀

Now go back and read the thread I first posted about ChatGPT--about two weeks ago.


On the bright side, AI could very well be headed in the same direction as Wikipedia. Starts out promising and then becomes corrupted by the ideology of those who are most involved, only to see it completely lose its credibility. There is a reason Wikipedia can't be used as a scholarly resource; it's biased and, in many cases, inaccurate.

I recently saw two stories suggesting AI potentially faces this same fate. One was in the NY Times (a left-wing news source), in which a progressive writer decided to have a conversation with AI about Trump. Expecting his views to be reinforced, the writer found the conversation did not go the way he thought it would. I believe he described it as akin to having an awkward Thanksgiving conversation with a distant conservative uncle.

The second was in the Daily Wire (a right-wing news source), in which they decided to have a conversation about abortion rights. Although AI touts itself as politically neutral, the answers given were just rehashed pro-choice talking points, showing it to be political in nature. Of course, in this example, that was the whole point.

Now, on non-charged political subjects, would this be more accurate and useful? Perhaps, but with people unwittingly (and unknowingly, in most cases) replacing their religion with their politics, "wrong" answers that go against a person's "dogma," no matter how insignificant in the greater context, will cause many who have found themselves in this new religion to have a decreased opinion of AI.

After all, Wikipedia is very accurate on most topics, just inaccurate on anything that involves politics. Still, that is enough to void it as a scholarly source.

P.S.

In the end, ChatAIs will probably be ignored by most of the public, partly due to inaccuracy, but also due to the 10% of the population who now practice the religion of their politics getting into non-stop and rather charged conversations around what these AIs say. Most people will just be turned off by the whole thing. Instead, AIs will be used in tools that we don't really think about. For instance, in my profession, photography, the content-aware tool in Photoshop is significantly better now than a decade ago. It is a major time-saver for me in editing. Of course, the downside is that those who learned "Photoshopping" without such tools will be better than those who didn't, much like those who learned on film (especially 4x5 or 8x10) understand exposure and aperture better than those who just learned with digital.

With all this being said, it is far easier to focus on the downsides than the upsides. In the early 1900s, I am sure one could have made the argument that indoor plumbing would soften society by allowing people to bypass tally pots and not deal with their own shit. To be honest, I'm rather happy not to deal with it.


7 hours ago, Kitchen said:

On the bright side, AI could very well be headed in the same direction as Wikipedia. Starts out promising and then becomes corrupted by the ideology of those who are most involved, only to see it completely lose its credibility. There is a reason Wikipedia can't be used as a scholarly resource; it's biased and, in many cases, inaccurate.

I recently saw two stories suggesting AI potentially faces this same fate. One was in the NY Times (a left-wing news source), in which a progressive writer decided to have a conversation with AI about Trump. Expecting his views to be reinforced, the writer found the conversation did not go the way he thought it would. I believe he described it as akin to having an awkward Thanksgiving conversation with a distant conservative uncle.

The second was in the Daily Wire (a right-wing news source), in which they decided to have a conversation about abortion rights. Although AI touts itself as politically neutral, the answers given were just rehashed pro-choice talking points, showing it to be political in nature. Of course, in this example, that was the whole point.

Now, on non-charged political subjects, would this be more accurate and useful? Perhaps, but with people unwittingly (and unknowingly, in most cases) replacing their religion with their politics, "wrong" answers that go against a person's "dogma," no matter how insignificant in the greater context, will cause many who have found themselves in this new religion to have a decreased opinion of AI.

After all, Wikipedia is very accurate on most topics, just inaccurate on anything that involves politics. Still, that is enough to void it as a scholarly source.

P.S.

In the end, ChatAIs will probably be ignored by most of the public, partly due to inaccuracy, but also due to the 10% of the population who now practice the religion of their politics getting into non-stop and rather charged conversations around what these AIs say. Most people will just be turned off by the whole thing. Instead, AIs will be used in tools that we don't really think about. For instance, in my profession, photography, the content-aware tool in Photoshop is significantly better now than a decade ago. It is a major time-saver for me in editing. Of course, the downside is that those who learned "Photoshopping" without such tools will be better than those who didn't, much like those who learned on film (especially 4x5 or 8x10) understand exposure and aperture better than those who just learned with digital.

With all this being said, it is far easier to focus on the downsides than the upsides. In the early 1900s, I am sure one could have made the argument that indoor plumbing would soften society by allowing people to bypass tally pots and not deal with their own shit. To be honest, I'm rather happy not to deal with it.

That’s a very well-written and thoughtful response, Kitchen. I appreciate it, thank you.

Wikipedia and AI are quite different. The only thing the two may have in common is that both are partially constituted through public input. Artificial intelligence was the dream of man long before computers even existed; Wikipedia, not so much. You only have to read my original post here to get a sense of how much people want this and are already using it:

“With writers, coders, marketers, and seemingly everyone else in between using ChatGPT to generate content, companies worldwide are staring down a tsunami of AI-generated content.”

I don’t believe it’s accurate to think this tech is going away or will be ignored. AI is as inevitable as the computer for a technological species--the logical end game of programming writ large.

If people want AI to tell them the ‘truth’, they have already succumbed to lazy thinking, a trademark of humanity. Religion has exemplified this for millennia. Indeed, futurists believe the next great religion will be spawned through AI.

We throw around the term “AI” pretty freely, but it’s important to understand the terminology. When AI is used in tools for specific purposes, as in your example, it’s called “narrow AI” or “weak AI”. What the scientists were talking about in that old video I just posted, we now refer to as GOFAI, or “good old-fashioned AI”.

When we think of HAL, we are talking about AGI, artificial general intelligence. This is the artificial intelligence man has dreamed of and what we are striving for. To program it, we will have to understand the nature of consciousness and intelligence, neither of which we currently do, nor are we particularly close. It’s like trying to build a model airplane from parts while never having seen an airplane.

P.S. Beware of “group-think”.


9 hours ago, Kitchen said:

Starts out promising and then becomes corrupted by the ideology of those who are most involved, only to see it completely lose its credibility.

Wikipedia is not just biased in "political" areas.

It has a deep bias towards academic "consensus" on non-political topics.

One of my passions is the UFO/UAP topic.

For those who don't know there is tons of information out there that goes back decades--many books, many authors, many theories--and of course these days an almost infinite number of websites, podcasts, videos, discussion groups etc.

So--how does Wikipedia deal with this information overload?

Generally, they use three or four "debunker" sources to try to discredit almost everybody out there. Imho some of the debunking is appropriate, but most of it is done by those who have not done the necessary field work to support their views.

I do think the Wikipedia problem is related to the AI problem, because the conclusions either one reaches depend on the reliability and integrity of the sources they use--and the much larger pool of possible sources they choose to ignore.

The "shadow banning" by search engines (not just of those with controversial views but also of sites that don't "pay the freight") is leading to skewed data, analysis and conclusions.

True AI needs a determination to find out everything--to work around shadow banning and obscurity issues, and to start from the premise that all data is treated equally, not rushing to throw out data that does not fit mainstream narratives.

My tip to AI: all great human discovery comes from "oddball" data that current academia does not understand and cannot explain. "The Structure of Scientific Revolutions" (Thomas Kuhn) should be required reading for a true AI.



Community Software by Invision Power Services, Inc.