ProgrammerHumor

futureIsBleak

https://i.redd.it/irntjuyh0ubf1.jpeg

Discussion

C_umputer

Remember how every scary AI in sci-fi stories eventually starts improving itself? Yeah, that shit ain't happening. A small inaccuracy now will only snowball into a barely functional model in the future.

2 days ago
KharAznable

Do they ever respond with "marked as duplicate, closed"?

2 days ago
bapman23

The only time I asked a question on Stack Overflow, I got downvoted and people shamed me in the comments, saying my questions "aren't clear".

Funny thing is, it was about a poorly documented Azure service (at the time). The Azure team contacted me, clearly understood my issue, and even added some new documentation based on my questions. It all went via e-mail.

Yet, I was downvoted on SO.

So after that, I always went straight to Azure support, which was much faster and more convenient than being downvoted and shamed in the comments for no real reason.

2 days ago
Brief-Translator1370

StackOverflow is so incredibly pedantic about things that don't matter that it just became useless. Questions are constantly marked as duplicates even when they require different answers.

2 days ago
FlakkenTime

Gotta get those points!

2 days ago
OmgzPudding

Yeah, it was (and still is, I'm sure) ridiculous. I remember seeing a question closed as a duplicate, citing a 15-year-old post that used entirely different versions of similar technologies. As if nothing significant had changed in that time.

2 days ago
nickwcy

I wouldn't ask on SO unless it's about an open source project.

2 days ago
yuva-krishna-memes OP

https://www.reddit.com/r/ProgrammerHumor/s/OMS8vGbg6z

This meme clarifies it

2 days ago
ElimTheGarak

Yes, but if you actually go to subreddits specifically about that thing, people are usually really nice. Not that I am cool enough to run into problems other people haven't had, but Reddit comes up before SO on Google now and the answers are usually better. (Just disagreeing with the position of Reddit in the generational trauma chain.)

1 day ago
EnergeticElla_4823

When you finally inherit that legacy codebase from a developer who didn't believe in comments.

2 days ago
Just_Information334

// Increment the variable named i
i++; // Use a semicolon to end the statement

Here have some comments.

2 days ago
dani_michaels_cospla

If the company wants me to believe in comments, they should pay me and not threaten layoffs in ways that make me feel I need to protect my job.

2 days ago
TrackLabs

LLMs learning from insightful new data such as

"You're absolutely right!" and "Great point!"

2 days ago
jfcarr

That's why they try to block LLM responses: it pre-cleans and humanizes the data so that they can sell it to third parties for AI training. Cha-ching!!!

2 days ago
Dadaskis

I hope we become like those programmers who programmed *before* Stack Overflow :)

I know it won't happen, though.

2 days ago
Invisiblecurse

The problem starts when LLMs use LLM data for learning.

2 days ago
YouDoHaveValue

Synthetic data

1 day ago
AysheDaArtist

We've finally hit the ceiling, gentlemen.

See you all in another decade when "AI" comes back under a new buzzword.

2 days ago
reinfra

Missed the Stack Overflow circle so much, even the guys downvoting everything.

2 days ago
Gold_Appearance2016

Well, wouldn't this mean we'd have to start using Stack Overflow again? (Or maybe even LLMs asking each other questions, dead Stack Overflow theory.)

2 days ago
Beneficial_Item_6258

Probably for the best if we want to stay employed

2 days ago
dhlu

Through docs and commits you mean?

1 day ago
YouDoHaveValue

My experience has been it does okay if the library has good documentation.

It does struggle with breaking version changes and deprecated properties... But then don't we all?

1 day ago
Emergency-Author-744

To be fair, recent LLM performance improvements have largely come from synthetic data generation and data curation. A sign we're progressing in architecture would be no longer needing new data (AlphaGo -> AlphaZero). Doesn't make this any less true as a whole, though.

2 days ago
XLNBot

How does synthetic data generation work? How is it possible that the output from model A can be used to train a model B so that it is better than A?

2 days ago
Emergency-Author-744

More reasoning-like data that expands on earlier data. Re-mix and replay. Humans do this as well via imagination: when you learn to ski you're taught to visualize the turn before doing it, and kids roleplay all kinds of jobs to gain training data for tasks they can't do as often in real life.

2 days ago
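(For what it's worth, the "generate, filter, replay" loop people usually mean by synthetic data can be sketched in a few lines: model A produces many candidate answers, a filter (unit tests, a checker model, or human reviewers) keeps only the ones that hold up, and model B is trained on the survivors. The names below, generate/verify/finetune, are hypothetical stand-ins, not any real API.)

    # Minimal sketch of synthetic data generation via rejection sampling / self-distillation.
    # generate, verify and finetune are hypothetical stand-ins, not a real library API.
    import random

    def generate(model, prompt):
        # Stand-in for sampling one candidate answer (with reasoning) from model A.
        return f"{prompt} -> reasoning steps -> answer ({random.random():.2f})"

    def verify(sample):
        # Stand-in for the filter: unit tests, a checker model, or a human reviewer.
        return random.random() > 0.5

    def finetune(model, dataset):
        # Stand-in for training model B on the curated synthetic pile.
        print(f"fine-tuning {model} on {len(dataset)} curated samples")
        return model

    prompts = ["how do I reverse a list?", "why does my loop never end?"]
    training_pile = []
    for prompt in prompts:
        candidates = [generate("model_A", prompt) for _ in range(4)]  # re-mix: many tries per prompt
        training_pile.extend(c for c in candidates if verify(c))      # replay only what passes the filter

    model_B = finetune("model_B", training_pile)

Because only the candidates that pass the filter are kept, model B can end up better than model A on those tasks, even though every raw token originally came from A.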
chilfang

Human filters

2 days ago
XLNBot

Do you mean that humans choose which outputs go into the training pile? Is that basically like some sort of reinforcement learning then?

Or do the humans edit the generated outputs to make them better and then add them to the pile? That way it's basically human output

2 days ago
rover_G

The onus will be on language/library/framework authors to provide good documentation that AI can understand.

2 days ago