ProgrammerHumor

youAreAbsolutelyCorrectIMadeItUp

https://i.redd.it/s2c555npjmbf1.jpeg

Discussion

Gtantha
:rust:

So everything. It makes everything up. The chance that the made-up thing coincides with reality keeps increasing, and that's what we perceive as artificial "intelligence" getting better. There is still no understanding.

1 day ago
redheness
:p:

But it really gets better at convincing you that this bullshit is true, making AI more dangerous and its users more stupid.

1 day ago
Excellent-Refuse4883

better at convincing you that this bullshit is true

Definitely does better than my boss at this point

15 hours ago
skwyckl
:elixir-vertical_4::py::r::js:

This is one of the things that bug me the most. Is it really that difficult to instruct the LLM to say "I am sorry, I don't know about that particular topic"?!

1 day ago
lare290
:c::cp::cs::j::py:

"I don't know" is not part of the training data set. it's literally just an extrapolation machine, going "if a gives f(a) and b gives f(b), then surely c gives f(c)"

1 day ago
skwyckl
:elixir-vertical_4::py::r::js:

If it can't find any data about something, it should respond with "I don't know" - I'm sure they can make that work somehow

1 day ago
metaglot

That's not how NNs work. They will fit the curve no matter how badly it fits.

But also...

When you figure out how to unilaterally make people respond like that, let me know.

1 day ago
Zeikos

You kind of can get an "I don't know" - but not super reliably - by measuring the model's perplexity.
Basically you look at the probability distribution over candidate tokens, and if it's very spread out (aka confidence is low) you warn the user about that.

That said, it's quite a brittle strategy, since perplexity can be high for reasons other than the model not knowing.
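A minimal sketch of that idea in Python (assuming you can get the per-step logits out of your inference stack; the entropy threshold of 3.0 is an arbitrary placeholder, not a recommended value):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the vocabulary
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def step_entropy(logits):
    # Shannon entropy (in nats) of the next-token distribution;
    # high entropy means probability is spread over many candidate tokens
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum())

def looks_unsure(per_step_logits, threshold=3.0):
    # per_step_logits: one vocabulary-sized logit vector per generated token.
    # Flags the whole answer when the average next-token entropy is high.
    mean_entropy = float(np.mean([step_entropy(l) for l in per_step_logits]))
    return mean_entropy > threshold, mean_entropy
```

A caller would run this over the logits of a generated answer and, if flagged, prepend a "low confidence" warning - and tuning that threshold per model and domain is exactly where it gets brittle.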

1 day ago
Ecstatic_Student8854

If it can be high despite the model knowing, then it'd give more false positives than false negatives on admitting it doesn't know about a topic, right?

I.e. if it doesn't know about a topic it's very likely to say so, but if it does know something it might still say it doesn't. Seems like a fine enough solution to me, especially compared to whatever we have now.

1 day ago
anonymity_is_bliss

The whole technology is based on inference. If it responded with "I don't know" for anything not directly in its training data, it would just be a big hashmap; using prior data to extrapolate to new data is the whole point of machine learning and AI in the first place.

1 day ago
RiceBroad4552
:s:

And that's exactly why this approach can never work reliably.

A stochastic parrot is a nice toy, and you can use it to produce convincing bullshit, but it's definitely not the path to real AI.

23 hours ago
paul5235

Yes, it is.

20 hours ago
Dotcaprachiappa
:s:

Yes it is

12 hours ago
Middle-Parking451

Stuff like Claude and private LLMs already do that.

8 hours ago
Alternative_Fault_12

There is no function in an LLM for it to determine that it doesn't know about a topic it wasn't trained on, or about your very specific question. That's a function people are still trying to add, and it's really hard to do. It's easy to make it return a response, but hard to make it determine that it doesn't know enough, and return a response that says, in short, that it really doesn't know.

1 day ago
theoht_

AI when you ask about anything

FTFY

1 day ago
seoizai1729

please don't hallucinate bro they got my family 😭

1 day ago
iMac_Hunt

You’re so right! Here is the correct solution…

Good catch! The correct solution is…

Spot on! Here is the corrected solution…

1 day ago
RiceBroad4552
:s:

I wonder why more people don't get tired of that and just accept that it does not work, and actually never will with this "token guessing" approach.

23 hours ago
Gamechanger925

True... it happens a lot when the AI gets confused by prompts and basically hallucinates.

1 day ago
renrutal

GPT o1 almost went full Jihad on me for telling it that Zig's @intCast only takes one argument, and not two like it insisted it did. It did not back off at all.

The bullshitting is strong in that one.

1 day ago
RiceBroad4552
:s:

It's futile to try to "argue" with an LLM. It will always just continue the prompts, with whatever it "memorized" from training, or what is hardcoded in system prompts.

It can't learn from the prompts as it can't reason.

23 hours ago
santient

Source? My source... is that I made it the fuck up!

21 hours ago