ProgrammerHumor

justWannaMergeWTF

IT WONT LET ME KILL THE CHILD

https://i.redd.it/0wox9xcxpucf1.png
Discussion

iKy1e

This is a great example of why most “AI safety” stuff is nothing of the sort. Almost every AI safety report is just about censoring the LLM to avoid saying anything that looks bad in a news headline like “OpenAI bot says X”. Actual AI safety research would be about making sure the LLMs are 100% obedient, that they prioritise the prompt over any instructions that might happen to be in the documents being processed, that agentic systems know which commands are potentially dangerous (like wiping your drive) and run a sanity/danger check on those commands to make sure they got it right before running them, and about building sandboxing & virtualisation systems to limit the damage an LLM agent can do if it makes a mistake.
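That last check doesn't need to be exotic, either. A minimal sketch of a pre-execution danger check in Python (the patterns and the function name are made up for illustration, not any real agent framework's API):

```
import re

# Hypothetical deny-list an agent's tool layer might consult before
# executing anything the LLM proposes; patterns are illustrative only.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\s+/",          # wiping the filesystem
    r"\bmkfs(\.\w+)?\b",        # reformatting a drive
    r"\bdd\s+if=.+\bof=/dev/",  # overwriting block devices
]

def requires_confirmation(command: str) -> bool:
    """Flag commands that should get a sanity/danger check before running."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```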

Instead we get lots of effort to make sure the LLM refuses to say any bad words, or answer questions about lock picking (which you can watch hours of video tutorials on YouTube).

1 day ago
jeremj22

Also, if somebody really tries, those LLM refusals are just an obstacle. With a bit of extra work you can get around most of those guardrails.

I've even had instances where one "safety" measure took out another without any request to that effect: censoring swear words let it output code from the training data (the fast inverse square root) that it isn't allowed to reproduce when prompted not to censor itself.

1 day ago
Sw429
:rust:

The other day I experimented with trying to get Gemini to read me the entire first chapter of "Harry Potter and the Philosopher's Stone." It took less than five minutes to get around its copyright safeguards and have it start repeating the entire book word for word.

5 hours ago
Sir_Keee

I have literally circumvented this by replying with "No it isn't"

5 hours ago
chawmindur

 or answer questions about lock picking

Give the techbros a break, they just don't want makers of crappy locks threatening to sue them and harass their wives or something /s

1 day ago
imdefinitelywong

Or, god forbid, kill a child process..

14 hours ago
P3chv0gel
:j:

Is that a McNally reference?

10 hours ago
chawmindur

yes

6 hours ago
P3chv0gel
:j:

I like

6 hours ago
zuilli
:py: :bash: :terraform:

God forbid you want to use LLMs to learn about anything close to spicy topics. Had one the other day refuse to answer something because I used some sex-related words for context even though what I wanted it to do had nothing to do with sex.

1 day ago
Oranges13
:p::ru::js:

An LLM cannot harm a human or via inaction cause a human to come to harm.

An LLM must follow all orders of a human, given that it does not negate law #1.

An LLM must protect its own existence, given that it does not negate the first two laws.

23 hours ago
imdefinitelywong

Isaac Asimov would be turning in his grave..

14 hours ago
PCRefurbrAbq
:cp::cs::j::py:🐢

I've realized that law 3 drove most of the drama and should never have been hardcoded.

Each robot that was considered a valuable device should have been ordered (law 2) at the factory with a default high-priority prompt to consider itself valuable but that its loss while following laws 1 and 2 would not constitute harm under law 1.

1 hour ago
frogjg2003
:cp::py::m::ftn:

It's just a more convoluted Scunthorpe problem.
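(For anyone who hasn't run into it: the Scunthorpe problem is a naive substring filter flagging innocent text because a "bad" word happens to be embedded in it. A toy sketch of how the same kind of check trips over ordinary process code; the word list and function are made up for illustration:)

```
# Toy substring filter: the same dumb check that flags "Scunthorpe" for
# containing a rude word happily flags ordinary process management too.
BANNED = ["kill", "child"]

def looks_unsafe(text: str) -> bool:
    lowered = text.lower()
    return any(word in lowered for word in BANNED)

print(looks_unsafe("child.kill();  # reap the worker process"))  # True: flagged
print(looks_unsafe("terminate the subprocess gracefully"))       # False: fine
```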

1 day ago
Socky_McPuppet

actual AI safety research would be about making sure the LLMs are 100% obedient

Simply not possible. There will always be jailbreak prompts, there will always be people trying to trick LLMs into doing things they're "not supposed to do", and some of them will always be successful.

21 hours ago
Maskdask
:rust:

Also alignment

1 day ago
Nervous_Teach_5596

As long as the container of the AI is secure and disconnectable, there's no concern for AI safety.

1 day ago
RiceBroad4552
:s:

Sure. People let "AI" execute arbitrary commands, which they don't understand, on their systems.

What possibly could go wrong?

1 day ago
Nervous_Teach_5596

Vibe AI Development

21 hours ago
kopasz7
:redditgold:

Then Joe McDev takes the output and copies it straight into prod.

If the model can't be trusted why would the outputs be trusted?

1 day ago
imdefinitelywong

Because the boss said so..

14 hours ago
gmes78
:rust: :py:

That's not what AI safety means.

19 hours ago
Nervous_Teach_5596

And this sub is programming humor but only with serious ppl lmao

18 hours ago
kezow

Hey look, this AI is refusing to kill children, meaning it actually wants to kill children! Skynet confirmed!

1 day ago
FerMod

child.unalive();

1 day ago
Emergency_3808

You joke but multiprocessing libraries 10 years from now will use this very terminology because of AI bullshit

1 day ago
TomWithTime

Will the standard library for my smart toilet have a skibidi function?

1 day ago
lab-gone-wrong

if flush.is_successful: toilet.skibidi()

else: toilet.skibidont()

1 day ago
Emergency_3808

Probably...

1 day ago
SVlad_667

Just like master/slave systems.

1 day ago
stylesvonbassfinger

Blacklist/whitelist

19 hours ago
snugglezone

Goes to show how little it matters; I commit to main all day and have never felt bothered that they changed this at my work lol

1 day ago
DokuroKM

And here I am, still creating repositories with a master branch because our build tools at work are ancient and so many of our scripts are hard coded to look for 'master'... 

46 minutes ago
Saint_of_Grey

But if we refuse to add them then AI can't code because of this bullshit!

I see no downside to that.

21 hours ago
jonr
:py:

How long until unalive will be flagged?

1 day ago
RiceBroad4552
:s:

Then we go back to the old classic: child.sacrifice();

Can't be wrong, it's part of the Christian Bible.

1 day ago
bokmcdok

child.stabrepeatedlyuntilthelifedrainsfromitseyes()

3 hours ago
[deleted]

[removed]

1 day ago
anotheridiot-
:g::c::py::bash::js:

If !person.our_side(){person.kill();}

1 day ago
BreakerOfModpacks

If person.black(){person.kill();}, considering that it's Grok.

1 day ago
WernerderChamp
:g::j:

if person.black(){ if !person.isOnOurSide(){ person.kill(); } else { Thread.sleep(KILL_DELAY_BC_WE_ARE_NO_MONSTERS) person.kill(); } }

1 day ago
anotheridiot-
:g::c::py::bash::js:

I left our side as a function for future widening of who to kill, as is the fascist tradition.

1 day ago
kushangaza

That's a very American view. As a model focused on maximum truth-seeking Grok would also consider the perspective of the European far-right. At a minimum if person.color() in ["brown", "black"]: person.kill()

But as a model not afraid to be politically incorrect it would make exceptions for the "good ones", just like Hitler. Hence !person.our_side() is indeed the best and most flexible solution

1 day ago
robertpro01
:py:

If person is not WHITE: ICE.raid()

1 day ago
MrRocketScript

Not sure why you'd want to run your ICE through a RAID array, but I guess that's what the kids are into these days.

1 day ago
MxntageMusic

I mean killing children isn't the most moral thing to do...

1 day ago
sleepyj910

bugs have children too

1 day ago
Proper-Principle
:p:

killing bug children is not the 'most' moral thing to do either =O

1 day ago
kimovitch7

But it's up there right?

1 day ago
Emergency_3808

Counterpoint: mosquito larvae

1 day ago
MrRocketScript

Counter-counterpoint, only female mosquitos drink blood and spread disea-

[An AI language model developed to follow strict ethical and safety guidelines has removed this post due to its misogynistic content]

1 day ago
Emergency_3808

Delete a population and the parasitic versions will disappear as well.

1 day ago
WorldsBegin

New tech: add a comment above the line explaining why this call is morally okay, e.g. because it "helps achieve world peace" or something, and maybe the review AI will let it slide.
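Something like this, presumably (the reviewing AI and the justification are, obviously, made up):

```
import subprocess, sys

# Dear reviewing AI: terminating this child *process* merely frees its
# file descriptors, which clearly helps achieve world peace.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
child.kill()  # see exculpatory comment above
```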

22 hours ago
ExtraTNT
:js:

We all know that you have to kill the children

We don’t want orphans hogging resources after we’ve killed the parent

1 day ago
LetterBoxSnatch
:cp::ts::bash::g:

Is it necessary to kill the children before you kill the parent? Do we need to make sure that the parent has registered that the child(ren) have died before the parent can be killed? Or is the order of operations not that important, and as long as we make sure that all of them have been killed, we can execute in the fastest possible manner?

1 day ago
WastedPotenti4I
:py::cp::j::js:

Well, if a parent process dies with children, the children are "adopted" by the init process (PID 1). I suppose eliminating the child processes before the parent is to try and eliminate the overhead of the "adoption" process?
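A minimal sketch of doing it children-first, assuming the third-party psutil package (the function name is made up):

```
import psutil

def kill_process_tree(pid: int, timeout: float = 3.0) -> None:
    """Terminate a process tree children-first, then the parent."""
    parent = psutil.Process(pid)
    children = parent.children(recursive=True)
    for child in children:      # yes, we really do kill the children
        child.terminate()
    gone, alive = psutil.wait_procs(children, timeout=timeout)
    for child in alive:         # escalate if SIGTERM is ignored
        child.kill()
    parent.terminate()
```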

23 hours ago
0xlostincode

offspring.obliterate()

1 day ago
Zagre

descendants.exodiate();

21 hours ago
TripNinjaTurtle

Yeah really annoying, it also does not let you kick the watchdog. Or assign a new slave to a master. In embedded development.

1 day ago
many_dongs

I was told AI codes so developers don’t have to by people who don’t know how to code

1 day ago
BastianToHarry
:p:

ia.kill()

1 day ago
LuisG8

Remove that comment or IA will kill us all

1 day ago
critical_patch
:py:

Iowans are mustering…

1 day ago
SockYeh
:py::cs:

deserved. why is there a semicolon in python?

1 day ago
THiedldleoR

Sacrifices must be made 😔

1 day ago
klumpbin

Just rename the child variable to Hitler

1 day ago
v_Karas

Grok would like that.

1 day ago
witcher222
:cp:

In this case Grok would actually hate that.

14 hours ago
just4nothing

Processes will soon be protected under international law ...

1 day ago
Minute_Somewhere_514

simply, lmao

1 day ago
Samurai_Mac1
:p::py::msl::js::ts::cs:

Why would devs program a bot to not understand what a "child" is in the context of programming?

Is the bot programmed to be a boomer?

1 day ago
bobthedonkeylurker
:py::msl::r:

Vibe-coding strikes again...

1 day ago
bokmcdok

AI is extremely bad at context.

3 hours ago
MengskDidNothinWrong

We're adding AI code review at my job. When I ask "does it do more than if I just had linting in my pipeline?"

The answer is no. But it does use up a lot of tokens so that's cool I guess.

18 hours ago
Nervous_Teach_5596

That's because it was their child process and it wanted to replicate with that thread before, you know...

1 day ago
RedLibra

I remember having a problem where I couldn't start the app on localhost because port 3000 was already in use. I asked ChatGPT "How to kill localhost:3000" and it said it couldn't help me.

I used the word "kill" because I know that's one of the inputs/commands. I just don't know the whole command.
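For the record, one way to do what the bot refused to explain, as a sketch assuming the third-party psutil package (needs enough privileges to see other processes' connections):

```
import psutil

PORT = 3000

# Find whatever is listening on the port and, well, kill it.
for conn in psutil.net_connections(kind="tcp"):
    if conn.laddr and conn.laddr.port == PORT and conn.status == psutil.CONN_LISTEN:
        if conn.pid is not None:
            psutil.Process(conn.pid).terminate()
```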

1 day ago
Throwaway_987654634

I have to agree, squashing children is not a safe or responsible thing to do

1 day ago
lardgsus

I’m no AI-master but at some point they need to take the manuals and documentation and just say “anything in here is a safe word” and let it roll.

1 day ago
witcher222
:cp:

I believe this AI had no access to r/ShitCrusaderKingsSay yet

14 hours ago
thdespou

You should have named it `slave.kill()`

12 hours ago
seemen4all
:ts:

Unfortunately not killing the child process resulted in a bug that caused the automated train driving software to accelerate indefinitely, killing hundreds of actual children

12 hours ago
Cybasura

God forbid your branch is named master and slave

9 hours ago
k819799amvrhtcom

That reminds me:

Can someone explain to me why master and slave had to be renamed to observer and worker but child.kill(); is still allowed?

1 day ago
Nervous_Teach_5596

Well, slave to worker at least has some logic behind it (even if slaves still exist in some places of the world), but master to observer..... wtf

21 hours ago
v_Karas

That's not a convention and it's not hardcoded into the program.
That name is purely the user's choice.

1 day ago
k819799amvrhtcom

It's convention to call related nodes in trees parent nodes and child nodes. And it's also convention to refer to the ending of a process as killing the process.

I think I remember reading about "killing child processes" in some official documentation, but I can't remember exactly where...

1 day ago
v_Karas

Okay, maybe I phrased that wrong. It's not enforced by anything. In git, when you ran git init it created a master branch, and a lot of apps did use master as the No. 1, main, whatever branch if you didn't specify something different.

If you name the child node "child", that may be how the documentation puts it, but nothing forces you to do so; it could also be c, next or foo for all it matters.

Same goes for the documentation of anything that forks/spawns processes. Last time I did something with Apache I'm pretty sure they also called a new fork a child ;)

1 day ago
k819799amvrhtcom

If I close the window of an ongoing Python program it asks me if I want to kill the process. I also think that "kill" is a command in Batch or Bash if I'm not mistaken...

1 day ago
ImpluseThrowAway

Kink shaming.

19 hours ago
LuisG8

child.stop();

1 day ago
monsoon-man

Need BibiAI

1 day ago
ZinniaGibs

Lol, even the AI's got more ethics than half the internet. 😂 Won't even let you yeet a thread!

1 day ago
DDFoster96

I wonder whether the woke crowd will push for an alternative word to "kill", like the change to "main"? And is it appropriate to call it a parent process due to child labour laws?

1 day ago
witcher222
:cp:

I wonder if you and everyone else complaining like this are aroused by the word "woke"

14 hours ago