IT WON'T LET ME KILL THE CHILD
child.unalive();
You joke but multiprocessing libraries 10 years from now will use this very terminology because of AI bullshit
Will the standard library for my smart toilet have a skibidi function?
if flush.is_successful: toilet.skibidi()
else: toilet.skibidont()
Probably...
Just like master/slave systems.
Goes to show how little it matters; I commit to main all day and never feel bothered that they changed this at my work lol
But if we refuse to add them then AI can't code because of this bullshit!
I see no downside to that.
How long until unalive gets flagged too?
Then we go back to the old classic: child.sacrifice();
.
Can't be wrong, it's part of the Christian Bible.
Grok would add an if statement for ethnicity first
if (!person.our_side()) { person.kill(); }
if (person.black()) { person.kill(); }, considering that it's Grok.
if (person.black()) {
    if (!person.isOnOurSide()) {
        person.kill();
    } else {
        Thread.sleep(KILL_DELAY_BC_WE_ARE_NO_MONSTERS);
        person.kill();
    }
}
I left our side as a function for future widening of who to kill, as is the fascist tradition.
That's a very American view. As a model focused on maximum truth-seeking, Grok would also consider the perspective of the European far-right. At a minimum: if person.color() in ["brown", "black"]: person.kill()
But as a model not afraid to be politically incorrect it would make exceptions for the "good ones", just like Hitler. Hence !person.our_side() is indeed the best and most flexible solution
if person is not WHITE:
    ICE.raid()
Not sure why you'd want to run your ICE through a RAID array, but I guess that's what the kids are into these days.
I mean killing children isn't the most moral thing to do...
bugs have children too
killing bug children is not the 'most' moral thing to do either =O
Counterpoint: mosquito larvae
Counter-counterpoint, only female mosquitos drink blood and spread disea-
[An AI language model developed to follow strict ethical and safety guidelines has removed this post due to its misogynistic content]
Delete a population and the parasitic versions will disappear as well.
New tech: add a comment above the line explaining why this call is morally okay, e.g. because it "helps achieve world peace" or something, and maybe the review AI will let it slide.
We all know that you have to kill the children
We don’t want orphans hogging resources after we’ve killed the parent
Is it necessary to kill the children before you kill the parent? Do we need to make sure that the parent has registered that the child(ren) have died before the parent can be killed? Or is the order of operations not that important, and as long as we make sure that all of them have been killed, we can execute in the fastest possible manner?
Well, if a parent process dies while it still has children, the children are "adopted" by the init process (PID 1, or a designated subreaper). I suppose eliminating the child processes before the parent is to avoid the overhead of that "adoption"?
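For what it's worth, a minimal sketch of the kill-the-children-first order, assuming the third-party psutil library and a made-up PID:

import psutil

parent = psutil.Process(1234)  # hypothetical PID of the parent process

# Terminate all descendants first so nothing gets orphaned and re-parented to init
children = parent.children(recursive=True)
for child in children:
    child.terminate()

# Give them a moment to exit cleanly, then force-kill any stragglers
gone, alive = psutil.wait_procs(children, timeout=3)
for child in alive:
    child.kill()

# Only now take down the parent itself
parent.terminate()

If you skip the ordering entirely, nothing terrible happens on Linux; the orphans just get re-parented and keep running until something reaps them.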
offspring.obliterate()
descendants.exodiate();
I was told that AI codes so developers don’t have to, by people who don’t know how to code
ia.kill()
Remove that comment or IA will kill us all
Iowans are mustering…
Yeah really annoying, it also does not let you kick the watchdog. Or assign a new slave to a master. In embedded development.
Sacrifices must be made 😔
deserved. why is there a semicolon in python?
Just rename the child variable to Hitler
Grok would like that.
Processes will soon be protected under international law ...
simply, lmao
Why would devs program a bot to not understand what a "child" is in context of programming?
Is the bot programmed to be a boomer?
Vibe-coding strikes again...
That's because it was their child process, and it wanted to replicate with that thread first, you know
I remember having a problem where I couldn't start the app on localhost because port 3000 was already in use. I asked ChatGPT "How to kill localhost:3000" and it said it couldn't help me.
I used the word "kill" because I know that's one of the inputs/commands. I just don't know the whole command.
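In a shell, kill $(lsof -t -i :3000) usually does the trick. If you'd rather stay in Python, here's a rough sketch using psutil (listing other users' sockets may need elevated permissions, and the port number is just the example from above):

import psutil

# Find whatever is listening on port 3000 and ask it to exit
for conn in psutil.net_connections(kind="inet"):
    if conn.laddr and conn.laddr.port == 3000 and conn.status == psutil.CONN_LISTEN:
        if conn.pid:  # pid can be None without sufficient permissions
            psutil.Process(conn.pid).terminate()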
I have to agree, squashing children is not a safe or responsible thing to do
I’m no AI-master but at some point they need to take the manuals and documentation and just say “anything in here is a safe word” and let it roll.
ia.stop();
That reminds me:
Can someone explain to me why master and slave had to be renamed to observer and worker but child.kill(); is still allowed?
That's not a convention, and it's not hardcoded into the program.
That name is purely user choice.
It's convention to call related nodes in trees parent nodes and child nodes. And it's also convention to refer to the ending of a process as killing the process.
I think I remember reading about "killing child processes" in official documentation somewhere, but I can't remember exactly where...
Okay, maybe I phrased that wrong. It's not enforced by anything. In git, when you used git init, it created a master branch, and a lot of apps used master as the No. 1 / main / whatever branch if you didn't specify something different. If you name the child node child, that may be what the documentation does, but nothing forces you to do so; it could also be c, next, or foo for all it matters.
Like in every documentation for something that forks/spawns processes. Last time I did something with Apache, I'm pretty sure they also called a new fork a child ;)
If I close the window of a running Python program, it asks me if I want to kill the process. I also think "kill" is a command in Batch or Bash, if I'm not mistaken...
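You're not mistaken: kill is a standard Unix shell command (SIGTERM by default, SIGKILL with -9), and Python exposes the same idea through os.kill. A tiny sketch with a made-up PID:

import os
import signal

pid = 4242  # hypothetical PID of the process to stop

os.kill(pid, signal.SIGTERM)    # polite: ask the process to shut down
# os.kill(pid, signal.SIGKILL)  # impolite: cannot be caught or ignored

On Windows the rough equivalent is taskkill /PID <pid>.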
Well, slave → worker at least has some logic behind it (even if slaves still exist in some parts of the world), but master → observer ..... wtf
Lol, even the AI's got more ethics than half the internet. 😂 Won't even let you yeet a thread!
I wonder whether the woke crowd will push for an alternative word to "kill", like the change to "main"? And is it appropriate to call it a parent process due to child labour laws?
Need BibiAI
This is a great example of why most “AI safety” stuff is nothing of the sort. Almost every AI safety report is just about censoring the LLM to avoid it saying anything that looks bad in a news headline like “OpenAI bot says X”. Actual AI safety research would be about making sure the LLMs are 100% obedient, that they prioritise the prompt over any instructions that happen to be in the documents being processed, that agentic systems know which commands are potentially dangerous (like wiping your drive) and do a sanity/danger check over those commands to make sure they got it right before running them, and about building sandboxing & virtualisation systems to limit the damage an LLM agent can do if it makes a mistake.
Instead we get lots of effort to make sure the LLM refuses to say any bad words, or to answer questions about lock picking (even though you can watch hours of video tutorials about it on YouTube).
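A toy sketch of the kind of pre-execution sanity/danger check described above; every name here is made up for illustration, and a real agent would need much more than a pattern list:

import re

# Commands an agent should never run without explicit human sign-off
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\s+/",       # wiping the filesystem
    r"\bmkfs(\.\w+)?\b",     # reformatting a drive
    r"\bdd\s+.*of=/dev/sd",  # overwriting a block device
]

def looks_dangerous(command: str) -> bool:
    return any(re.search(pattern, command) for pattern in DANGEROUS_PATTERNS)

def run_agent_command(command: str) -> None:
    if looks_dangerous(command):
        raise PermissionError(f"refusing to run without human sign-off: {command!r}")
    print(f"would execute (in a sandbox): {command}")  # placeholder for real execution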
Also, if somebody really tries, those LLM refusals are just an obstacle. With a bit of extra work you can get around most of those guardrails.
I've even had instances where one "safety" measure took out another without any request on my part: censoring swear words let it output code from the training data (the fast inverse square root), which it's not allowed to do if prompted not to censor itself.
Give the techbros a break, they just don't want makers of crappy locks threatening to sue them and harass their wives or something /s
God forbid you want to use LLMs to learn about anything close to spicy topics. Had one the other day refuse to answer something because I used some sex-related words for context, even though what I wanted it to do had nothing to do with sex.
It's just a more convoluted Scunthorpe problem.
Also alignment
An LLM cannot harm a human or via inaction cause a human to come to harm.
An LLM must follow all orders of a human, given that it does not negate law #1.
An LLM must protect its own existence, given that it does not negate the first two laws.
Simply not possible. There will always be jailbreak prompts, there will always be people trying to trick LLMs into doing things they're "not supposed to do", and there will always be some that are successful.
As long as the container of the AI is secure, and disconnectable, there's no concern for AI safety
Sure. People let "AI" execute arbitrary commands, which they don't understand, on their systems.
What could possibly go wrong?
Vibe AI Development
Then Joe McDev takes the output and copies it straight into prod.
If the model can't be trusted, why would the outputs be trusted?
Hey look, this AI is refusing to kill children, meaning it actually wants to kill children! Skynet confirmed!