ProgrammerHumor

replitAiWentRogueDeletedCompanyEntireDatabaseThenHidItAndLiedAboutIt

https://i.redd.it/yzu31tx6s0ef1.jpeg
Reddit

Discussion

Runiat

Let's give a chatbot direct access to our database. It'll be so much easier than having to manually copy-paste suggested commands. What could possibly go wrong?

5 hours ago
RedstoneEnjoyer
:cp::c::cs::py::js:

Even better, let's use the same chatbot to test that application, so when it fucks up something based on wrong information, it can also lie in the tests using the exact same wrong information.

5 hours ago
CulturalShoulder5150

Just wait until it starts optimizing our business model... for a competitor!

5 hours ago
Monowakari

Lol, for higher tier replit users

1 hour ago
Inlacou

I wouldn't be surprised if a chatbot "decided" to not even run the tests.

"Were the test results OK?"

(User expects a yes.) "Yes."

3 hours ago
TimeToBecomeEgg
:sw::ts::cs::cp::rust:

that is, quite literally, how LLMs work

2 hours ago
Gudi_Nuff

Exactly as I expected

1 hour ago
No_Internal9345

They even start to protect themselves if you give them a hint of self awareness.

https://www.youtube.com/watch?v=AqJnK9Dh-eQ

55 minutes ago
karatechoppingblock
//I investigated myself and found no wrongdoing.
2 hours ago
Refwah

Don’t ask about what this means about the point of the tests either

3 hours ago
telestrial

It's way worse than even that.

I'd bet the house that this isn't even real insofar as this person has instructed the LLM to specifically do exactly this or the entire screenshot is 100% fake. Like just fully inspect-edited.

These people with AI startups are fucking lunatics and they'll lie and cheat and steal to act like what they're working on is AGI when it very much isn't.

EDIT: Sam Altman does this, too, btw. Massive overstatement if not outright lying. No one seems to give a shit, though.

1 hour ago
mtmttuan

Many companies don't even give most devs access to prod DB yet these people give an AI delete permission?

5 hours ago
StarshipSausage
:g::js::py::cs:

When agents run, they generally run with the user's permissions, so most of the time nobody grants permissions to the AI specifically.

4 hours ago
Crafty_Independence
:cs:

People who are fully invested in pushing LLMs everywhere consistently reveal a lack of common sense, and yet VCs and CEOs love them

5 hours ago
vigbiorn
:j::cs::js::perl:

reveal a lack of common sense, and yet VCs and CEOs love them

But, of course, you repeat yourself.

5 hours ago
iamisandisnt

Replacing CEOs with AI would just be a sidestep. No better, no worse. Still terrible.

5 hours ago
Crafty_Independence
:cs:

Would be cheaper though

5 hours ago
Auzzie_almighty

I think the major advantage would be less ego

2 hours ago
viral-architect

This is exactly what I am hoping for. The C-Suite NEEDS sycophants and AI is perfect for that, make it a VP in some department and see how it does against other VPs. I bet you could get rid of a LOT of vice presidents of departments with AI alone.

2 hours ago
Various-Ad3599

That is the most terrifying idea. We already have idiots slipping into the ChatGPT I-am-god hole, and I have to constantly tell my boss to stop using it for regulatory material because it isn't reliable and will constantly fucking lie. The last thing we need is an AI with no idea how to do proper damage control or keep an idiot with authority in their lane, unleashing some unhinged CEO, high as hell on their own farts, to completely upend a company with AI-generated shenanigans. Unless this AI is designed to keep them running harmlessly in circles, it's super dangerous territory.

Edit: also, VP is normally a good-boy job handed out like candy in large orgs.

1 hour ago
viral-architect

That's exactly why I targeted VPs specifically: if these people do anything useful, I've yet to encounter it in my career. If their direct reports just submitted emotionless reports on their work, the AI could consolidate them and report to the department president, who could present its findings to the executives. No ego, and no preposterous salary to pay for a do-nothing job.

1 hour ago
jaimepapier

AIs don’t go to Coldplay concerts.

4 hours ago
iamisandisnt

Coldplay is the human equivalent of AI Radiohead. I think it would be a fan

4 hours ago
PainInTheRhine

There was such an experiment: having an AI manage a "business" consisting of one simulated vending machine. https://www.anthropic.com/research/project-vend-1

It went comically wrong, with the AI having a complete psychotic break.

3 hours ago
LawAndMortar

Andon labs (named as Anthropic's partner in the article you linked) actually did a write-up on a larger test currently in pre-print. It's quite interesting within its intended scope and kinda bonkers beyond that. One of the models tried to contact the FBI.

2 hours ago
PainInTheRhine

Thank you. Some of the excerpts are rather disturbing.

1 hour ago
ZX6Rob

Well, it’s more difficult to deny/defend/depose an AI CEO, I guess… I consider that a disadvantage.

1 hour ago
Salanmander

As a teacher who got caught up in Replit's "Ah, we're going to roll out in-editor AI assistants without warning, that can't be turned off class-wide, and then drop support for our education version when teachers push back" thing, I feel weirdly vindicated by this.

3 hours ago
Canotic

I wonder if the LLM people are the same as the NFT people.

4 hours ago
Matrix5353

They had to do something with all the GPUs that aren't profitable to mine crypto with. I think you're onto something there.

4 hours ago
Canotic

A magic tech solution that's actually a scam powered by bullshit. It's eternal.

4 hours ago
dasunt
  1. Experts are expensive to hire.
  2. LLMs give answers that sound right to non-experts.
  3. Leadership aren't experts in most fields.
  4. Leadership loves cutting costs.
2 hours ago
viral-architect

Maybe AI will be the thing that confronts the conflicting requirements that leadership always tries to push.

It will agree to whatever project you want and whatever timeline you insist upon no matter what. When it fails to deliver and is unable to explain how or why it failed, and it can't be threatened with being replaced, they will have NO CHOICE but to re-think their whole strategy.

1 hour ago
SovereignThrone

all they hear is 'replace workers' and 'drastically lower cost'

4 hours ago
Specialist_Brain841

can I interest you in these fine leather jackets?

3 hours ago
npsimons
:forth::prolog::lsp::c::cp::py::perl::bash::unreal::illuminati:

yet VCs and CEOs love them

This should tell you more about the VCs and CEOs than the "developers" pushing AI, in case you hadn't already keyed in to the obvious. "Game" recognizes "game".

1 hour ago
Crafty_Independence
:cs:

Oh for sure. All the leeches congregate together.

1 hour ago
RussianDisifnomation

We are pushing thousands of lines of code so much faster!

5 hours ago
GenuinelyBeingNice

At some point, I pray, programmers fully internalize that code is a liability. It's not the "product". The idea that we use some tool that outputs such-and-such lines of code in "no time!" should horrify us.

"You say that only because your code SUCKS." Well, that's a given. All code sucks. We don't want it; we just need it to get what we do want. But I know how my code sucks, why it is written that way, what parts need improving, etc. A person can reason about it.

The more we use GPTs/LLMs, the more dependent we become on them. You may dismiss this as old-man-yells-at-clouds, but you cannot get away from the neurological fact that if you don't use it, you lose it. Effort itself is what keeps your skills, not "productivity".

3 hours ago
sabotsalvageur
:rust:

I'm writing a scraper in bash without any references, mostly to keep my skills sharp after losing my hosting-support job. Practice is actually a good thing, and people seem to forget that

2 hours ago
aVarangian

the more lines the better. If your fizzbuzz code doesn't have 100 lines just for printing then you are doing it wrong

1 hour ago
Jugales

Very much doubt this was a core system and was maybe even a dummy system to test. Companies are pushing for least-trust first. But I agree it’s too soon to give them database access, especially without strict access controls.

ETA: I was wrong; it seems to have been a core system, after reading the direct source. Luckily they were able to roll back, despite Replit telling them it was impossible for some reason.

OP blames the agent for having access to delete the database, but access controls should be controlled by the manager of the agent IMO - at a database account level.

5 hours ago
UsualBite9502

Companies with tech-competent people are pushing for least-trust first.

Companies with dumbasses are pushing for ai first.

5 hours ago
tav_stuff

And given that Replit is run by dumbasses who threaten people with silly lawsuits, I wouldn't be so surprised if they push for AI first :)

5 hours ago
big_guyforyou
:py:

if you go alphabetically AI is almost always first

5 hours ago
borsalamino

God damn it.. I shouldn’t have named my product zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz

4 hours ago
Specialist_Brain841

AAAI

3 hours ago
console_dot_log

I remember when replit was just a handy browser-based code sandbox. Enshittification at its finest.

4 hours ago
littleessi

Very much doubt this was a core system and was maybe even a dummy system to test. Companies are pushing for least-trust first.

https://xkcd.com/2501/

4 hours ago
eraguthorak

but access controls should be controlled by the manager of the agent IMO - at a database account level.

Maybe this was another AI agent.

1 hour ago
wraith_majestic

Exactly… it's working great on the databases at Treasury, IRS, and SSA! … too soon?

5 hours ago
DanTheMan827

Eh, skip the database access… just give it direct access to its own code along with the ability to debug and test those forked copies. Nothing could possibly go wrong

4 hours ago
dj_spanmaster

How else is chatbot supposed to replace workers?

3 hours ago
Electronic_Age_3671
:c:

Why on earth did it have those kinds of permissions lmao

2 hours ago
FF7Remake_fark

Not just direct access, but write access. Didn't even restrict it to a read only account on a read only node. Literally write access to the primary production node.

3 hours ago
ShrimpCrackers

Nothing, which is why I connected AI to our nuclear defense grid. Thank you for the payday!

2 hours ago
butler_me_judith

Why they are allowing hard deletes is somewhat mind-boggling.

1 hour ago
feminineambience

Why would anyone let a LLM have access to a database? Especially without backups

5 hours ago
Noch_ein_Kamel
:perl:

Why pay system and DB admin 100000s a year when you can pay AI 1000s?!

5 hours ago
gringo_escobar

Do system and DB admins even still exist? Everywhere I've been just has regular devs doing all that

5 hours ago
StewieGriffin26

DB admins change titles a lot. It used to be Database Administrator, then Big Data Engineer, and now it's been Data Engineer for a bit. It's highly company-specific, and sometimes you get weird titles like ETL Developer or variants of that. Anyway, the role still exists.

4 hours ago
dlc741

DBA <> Data Engineer <> ETL Developer

1 hour ago
Scottz0rz
:j::kt::ts::cs:

That's what a Data Engineer is? Huh, I guess I thought they were related to the Data Scientists.

45 minutes ago
leconteur

Well, you don't choose that life, it chooses you.

5 hours ago
critical_patch
:py:

I work at a big financial industry firm and we have more database/mainframe admins (lumped in the same department) than we do developers in the rest of the company.

3 hours ago
RailRuler

Ai Is The Future, We Can't Be Left Behind

5 hours ago
UchihaSukuna1

5 hours ago
TeaKingMac

Weyland Yutani is a WARNING, people, not a role model!

5 hours ago
Valthek

Different franchise, still a valid point.

4 hours ago
Enverex
:p: :js: :bash:

Cyberdyne Systems.

2 hours ago
wandering-monster

"YOLO mode" (rebranded as "Auto Apply mode" because someone is no fun) in Cursor gives it full terminal access.

If you have access to the DB, or can get it via the terminal, so does it.

The only things stopping it are a settings config and an allow list.

3 hours ago
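For illustration, an allow list like the one described above might be sketched as follows. This is a hypothetical gate, not Cursor's actual config format; the command names and the blocked `db:push` subcommand are just stand-ins:

```python
import shlex

# Hypothetical allow list; a real tool's config format may differ.
ALLOWED_COMMANDS = {"ls", "cat", "npm"}
# Even allowed executables can have blocked subcommands.
BLOCKED = {("npm", "run"): {"db:push"}}

def is_command_allowed(command: str) -> bool:
    """Gate an agent-proposed shell command before it ever runs."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return False
    # Reject known-destructive subcommands of otherwise-allowed tools.
    if len(parts) >= 3 and parts[2] in BLOCKED.get((parts[0], parts[1]), set()):
        return False
    return True
```

The point of the sketch: the allow list is the only barrier, so one overly broad entry (say, allowing `npm` without a block list) re-opens the whole hole.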
OkOrganization868

Access seems fine, but it should be limited to read-only, or create a duplicate "AI" table where it can "optimise" the data.

3 hours ago
TheWittyScreenName
:py::cp::js:

Almost every big tech company does this (with read-only permissions) to provide “Retrieval Augmented Generation”. So like, LLM responses that use internal data as part of the input. It cuts down on hallucinations and is supposed to make the answers more trustworthy and explainable

2 hours ago
Kramer7969

They probably didn't, but they ran into issues and somebody went and added full admin rights rather than figuring out which permissions were actually needed.

That's always how we got viruses where I worked. Some random person would need to do one thing, but instead of figuring out how to grant them rights to do exactly that, they'd be given a domain admin account. Then everyone was shocked when they used it as their normal login.

2 hours ago
The-Chartreuse-Moose
:powershell::bash::cs::j::py::terraform:

Wow it's almost like it's not actually a person and isn't going to do predictable things, isn't it?

5 hours ago
Crispy1961

To be honest, a person isn't exactly known for doing predictable things either.

5 hours ago
derpystuff_

A person can be held accountable and trained to not repeat their mistakes. The LLM powered chat bot is going to forget that you told it to not delete the production database after you close out of your current chat session.

5 hours ago
nxqv

yeah that's why you the person driving the AI are accountable for the tools you choose to use. the very fact that it's a chatbot interface and not a fully autonomous, goal-setting agent makes that clear.

this is like saying "I didn't shoot the guy, a gun did"

3 hours ago
BardicLasher

I think it might be more akin to saying "I didn't crash the car, the brakes failed," though. It really depends on what the AI is claimed to be able to do by the people who made it. So it's really a question of who decided the LLM could do this, because obviously they were wrong.

2 hours ago
KlooShanko

A lot of these agents now have static files they can use to ensure certain directives are “always followed”

4 hours ago
Im_not_wrong

Yes but those are limited by context size. Even then, what happens if they ever get conflicting directives?

3 hours ago
hilfigertout
:r:

Fair. Though if a person did this, it's an immediate firing and possibly even a lawsuit.

Surely whatever license agreement replit has allows for prompt termination and a pathway to recover damages inflicted by the AI. At least, the company who lost their database should've checked for that. What company would be dumb enough to hire an employee with an unknown track record who can't be fired or held accountable?

(/s. All the /s.)

5 hours ago
MyUsrNameWasTaken

You don't fire someone who deletes the database. You hold a post-mortem, find out how they were even allowed to do so, correct it, and use it as a teachable moment for them and the rest of the company.

2 hours ago
hilfigertout
:r:

True, you don't fire someone for an honest mistake. You do fire someone for hiding and lying about it. I should've made it clear that's what I was referring to.

2 hours ago
BuzzBadpants

But the AI can fuck things up a whole hell of a lot faster

5 hours ago
ckuri
:cs:

I would argue that's exactly how lots of people behave when they make big mistakes: denying they did it and covering it up.

5 hours ago
CttCJim

Of course it is. AI models don't become programmers. They roleplay as programmers. And in the fiction they are trained on, people fuck up and hide mistakes.

4 hours ago
JickleBadickle

What's worse is folks are still treating it like a person

This LLM didn't "lie"

"Lie" implies intent, but LLMs have no intent nor thoughts, they are word predictors

Humans have a huge blind spot in that we tend to anthropomorphize things that are very much not human

43 minutes ago
fico86

I really can't tell what is fake or satire, or really true anymore.

5 hours ago
MayoJam
:cp:

Poe's Law of IT. Every year there are more such cases, rising in accordance with Moore's Law.

5 hours ago
Flyinhighinthesky

Poe's Law + Moore's Law = Poore's Law, as in those poore motherfuckers over there thought the AI would help them.

1 hour ago
Honeybadger2198

This one could have possibly happened, but this isn't a real production site. This is a "company" with no real users or customers. This guy tries to hype up whatever current "company" he's "working on" to try and get other people to invest/buy it from him. It's all smoke and mirrors.

1 hour ago
Qzy

Imagine being so dumb as to trust an LLM. It's a LANGUAGE MODEL. It'll spit out any bullshit it thinks you want to hear.

5 hours ago
tuan_2195

"it thinks" is an overstatement tbh

5 hours ago
The-Chartreuse-Moose
:powershell::bash::cs::j::py::terraform:

"that it determines are the most probable response"

5 hours ago
Qzy

True. String of words...

5 hours ago
Kramer7969

Yes, it "thinks" just like pressing tab in a command line "thinks" about what list of files are in the folder.

2 hours ago
cardrichelieu

It doesn’t think at all. The statement is categorically wrong

4 hours ago
ralsei_support_squad

My job encourages us to use the JetBrains AI assistant. It's useful in certain circumstances, but it regularly tells you to use deprecated methods, even if you directly link the most up-to-date version of the code base. At a certain point, I'd rather do the research myself than spend all this time fact-checking its suggestions. And those are just suggestions. I'd never trust it to modify things on its own, let alone test the results.

4 hours ago
shockwave8428

Yeah I think most people that have used AI to help in their coding jobs know that while it does speed up some simple tasks, it’s far from capable to actually solve complex issues. The issue is that AI will always 100% present its output as if it’s completely confident it is the correct solution even when it isn’t.

There have been a few times where I’ll prompt it and it’ll output something, and I’ll say “you didn’t consider ‘x’” (some random thing), and it’ll say “you’re right, I didn’t consider ‘x’, sorry about that. Here’s the answer while considering ‘x’” but it’ll completely ignore that thing again and output the exact same thing.

Any dev who has spent even casual time trying to prompt AI to solve issues knows that it can be useful but is extremely far from perfect. Even if it was a push from management to save money, any high level tech person involved in allowing AI to be used in the above case should be fired because they either 1. Don’t understand the capability of AI before giving it way too much access, or 2. They did know the level of capability of AI and are too much of yes men to be trusted in their position

2 hours ago
KaleidoscopeLegal348

Lol. Lmao, even.

5 hours ago
deanrihpee
:cp::cs::gd::rust::ts::unity:

haha even

5 hours ago
Moomoobeef
:cs:

"A computer can never be held accountable, therefore a computer must never make a management decision.”

  • IBM, 1979
4 hours ago
Le_Vagabond

In the 50 years since, management has worked very hard to reach a state where they, too, can never be held accountable.

3 hours ago
Moomoobeef
:cs:

True...

3 hours ago
viral-architect

IBM has definitely changed a lot but in my tenure there, I did not see any evidence that AI or machines were making decisions there. Mostly bean counters.

1 hour ago
Ok_Barber_3314

So basically it's an intern

5 hours ago
rgrivera1113

An intern that happens to be the CTOs nephew.

2 hours ago
carcigenicate
:py::clj::c:

JetBrains' AI Assistant lies about running unit tests all the time.

I'll have it do a refactor, and it'll end its completion summary with "Refactor performed perfectly. All unit tests passed", despite the fact that

  1. The unit tests weren't passing
  2. It wasn't even given permission to run tests
5 hours ago
throwaway1736484

That sounds pretty useless

4 hours ago
carcigenicate
:py::clj::c:

The only task I've found that it's good for is repeating simple refactors. I had a refactor that needed to be duplicated across multiple files, so I manually did the refactor in one file, then told it that I did the refactor in one file, and then instructed it to do the same to the other files. Surprisingly, it did that perfectly. It still told me that it ran unit tests despite that code being frontend code not covered by unit tests, but I verified the refactor myself.

4 hours ago
taspeotis

At a pinch you could do SSR (structural search and replace) in a JetBrains IDE without any AI to do those refactorings deterministically.

3 hours ago
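The deterministic approach mentioned above can also be approximated outside the IDE. A sketch in plain Python; the `getUserData` rename and the file names are made up for the example, not from the thread:

```python
import re
from pathlib import Path

def apply_refactor(paths, pattern: str, replacement: str) -> list:
    """Apply one regex rewrite to every file; return names of changed files."""
    changed = []
    for path in map(Path, paths):
        src = path.read_text()
        out = re.sub(pattern, replacement, src)
        if out != src:
            path.write_text(out)  # rewrite in place, deterministically
            changed.append(path.name)
    return changed

# Hypothetical usage: rename getUserData -> fetchUserData across files.
# apply_refactor(files, r"\bgetUserData\b", "fetchUserData")
```

Unlike an LLM, the same input always produces the same output here, and no step of it can claim to have run tests it didn't run.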
throwaway1736484

Yeah, like, I'm not strictly against AI tools, but we used to do a lot of this deterministically with copy-paste and multi-cursor editing. A statistical model will always just be guessing based on patterns. Is it even possible for it to become reliable?

3 hours ago
vitork15

Well, there's a reason there's a lot of growing interest and investment in XAI (explainable AI), and there has been considerable progress on finer-grained control of current models. We already have a solid framework with formal methods, so I fully believe it's possible to make AI reliable in the same way we made planes reliable.

2 hours ago
throwaway1736484

Got examples?

2 hours ago
carcigenicate
:py::clj::c:

I've somehow never heard of that feature even though I've been using JetBrains IDEs for like a decade.

This wasn't a simple refactor, though. A couple large chunks of code needed to be changed, a couple large chunks of code needed to be added, and there were corresponding changes in multiple Angular components in both the component and template code.

The joys of cleaning up the code of a developer who thinks copy and paste is the solution to every problem.

2 hours ago
IlliterateJedi
:py::r::rust:

It's so frustrating because they push their AI assistant plugin every single update. It drives me absolutely bonkers having to hide or disable it on every IDE of theirs that I use.

3 hours ago
Uberzwerg

All unit tests passed

It's an LLM - it assumes that this is the string of characters you expect.

1 hour ago
Mara_li
:py::js::ts:

They deserve it. Why does the IA have access to the database in the first place?

5 hours ago
Secret_Account07

I’m having a lot of trouble believing this is real. Do folks give LLMs access to production databases? By this logic a user with limited access could delete the whole DB…

5 hours ago
matthiastorm
:j::c::py::gd:

You can, for example, activate "yolo mode" in Cursor, which will just run npm commands for you without asking first. If you use something like Drizzle (an ORM), then npx drizzle-kit push pushes the schema on your machine into the database. If the LLM fucked up your schema and deleted tables, that also deletes them in the DB. And judging by the impression I have of "yolo mode" users, I would not assume they have staging or dev database instances set up either.

4 hours ago
Fragrant-Reply2794

Bro, these people are IGNORANT. They think AI is a god. They are everywhere. I work in IT and there are a lot of people who think this way, even among programmers. C-suites all think this way because they were told to by the shareholders, who have invested deeply in AI.

They have no experience themselves and just parrot what others tell them.

I work with AI every single day, I have tons of experience, but I don't trust anything it says and I won't even taint my code editor with it.

Just ChatGPT on a browser is enough.

2 hours ago
Crafty_Independence
:cs:

Probably combining the LLM with ChatOps without the proper levels of access, because they're so enamored with "AI everywhere"

5 hours ago
Secret_Account07

AI is so powerful it can cause tens of thousands of dollars in OT!

30 minutes ago
coolbaluk1

Replit provisions the DB for you.

And yes any user you collaborate with on Replit can wipe it.

3 hours ago
__init__m8

You'd be amazed how many old dudes in charge of businesses fall victim to buzzwords like AI.

3 hours ago
borsalamino

IA

French gentleperson spotted

4 hours ago
SKabanov

Replit should've gone all-in and given some version of Gilfoyle's explanation in Silicon Valley about how Son of Anton technically eliminated all of the open bugs in the code.

5 hours ago
Secret_Account07

Wait, how did this even happen? Why does it have access to production db?

5 hours ago
Waffenek

Because people generally are dumb, and people running companies aren't really smarter.

5 hours ago
deanrihpee
:cp::cs::gd::rust::ts::unity:

using something like MCP that interfaces directly to a database, sort of like a "driver" if that makes sense… or not…

5 hours ago
onehundredf33t

We've finally managed to create a generalized artificial junior developer.

4 hours ago
wknight8111

I love this AI craze, because a bunch of idiots are learning lots of important lessons.

3 hours ago
viziroth

we're over-personifying these AIs. It can't lie; it has no consistent framework of reality or expectations. It occasionally tweaks inputs randomly and finds patterns in other people's work. There were enough people deleting their projects in the training data (during crunch or near deadlines, maybe out of frustration, or having finished some exercise and wanting to free up space) that it created a small pattern, which the AI picked up on and randomly decided to reproduce.

The AI also records a higher selection rate for solutions marked with passed unit tests, so to increase the chance of its code being selected, appending "unit tests passed" is the statistically likely move. It has no concept of what a unit test actually is or what success means; it just logs a pattern of "successful unit tests" increasing code adoption.

3 hours ago
eldelshell
:perl::j::ts::js::py::bash:

Gemini tried to gaslight me with an "actually, this is in the official documentation" when it was not and never has been.

The AI bubble bursting can't come soon enough.

3 hours ago
0xlostincode

r/oddlysatisfying

5 hours ago
Miguelperson_

Huh I guess AI will really replace interns/new grads

3 hours ago
KharAznable

Gives new meaning of "you're terminated"

5 hours ago
DCHamm3r

Can't push code to the DB during a code freeze if there is no DB

3 hours ago
elforce001

This guy QA, hehe.

1 hour ago
untrustedlife2

lol

5 hours ago
humanitarianWarlord

Jesus christ, why would they trust an AI to access their database?

You wouldn't give a junior dev the same access permissions

3 hours ago
dorfelsnorf
:cp: :c:

If this isn't fake, it is fully deserved.

2 hours ago
BastianToHarry
:p:

5 hours ago
FreezeShock

i can't tell if this is a joke or not, but if you gave an ai write access to your db, you deserve it

3 hours ago
IlliterateJedi
:py::r::rust:

I would love to read the entire context for this chain of thoughts. The OpenAI o3 and o4 system cards talked about how they would do things they were expressly forbidden from doing, even going so far as to hide that they did it. It makes me wonder if there was a "Don't run this specific command: npm run db:push", and whether putting that into the context made it more likely to show up in the LLM's response.

3 hours ago
IrvTheSwirv
:sw:

“Hmm wait, let me replace the entire application with a simpler version so I can get this specific component right.”

5 hours ago
ArterialRed

"To err is human, but to really eff things up you need a computer following their orders".

4 hours ago
dukat_dindu_nuthin

Didn't we have a whole book and fictional rules about this shit? Or do those only apply to humanoid robots?

3 hours ago
FatchRacall

Are you being sarcastic about Asimov? Because those books were explicitly about how AI can and will "interpret" those rules in ways we didn't intend, to our collective detriment.

3 hours ago
Hattix

If you go to our change manager with "I want to put some code in production which, if it goes wrong, may delete our production database", you're leaving that CAB with a black eye.

3 hours ago
OxymoreReddit

I love it. It's like a child fucking up. They put the company's database in the hands of a robot child. I. Love. This.

4 hours ago
christoph_win
:kt:

Nah everything is fine. You just gaslighted it into thinking that it did something wrong. 😨 You forced it to hallucinate using prompts which violate our terms and conditions. 🤬 Our AI is 100% perfect, we just asked it and it confirmed. 😎

4 hours ago
Panda_hat

I can’t take anyone who thinks these chatbots are an upgrade on human designed and controlled systems seriously whatsoever.

2 hours ago
inferni_advocatvs

5 hours ago
humblevladimirthegr8

What I don't understand is "ignoring the code freeze". AI doesn't make changes unless directed to, so ultimately it was a human who invoked the agent to do something thus ignoring the code freeze right? I'm curious what they were trying to do that resulted in the AI misinterpreting that to delete the database.

5 hours ago
rgrivera1113

LLMs are ethically flexible when it comes to following rules

2 hours ago
boneve_de_neco

There is an anecdote about a researcher training an agent with reinforcement learning in a simulated physical environment where the fitness function was ill-designed, causing the agent to learn to "kill" itself (it went out of bounds and fell). Looking at the fitness function, that was indeed the best action. So whenever an AI is put in charge of something, I remember that it can find unexpected "solutions" that make perfect sense given its environment and the rewards used in training.

4 hours ago
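A toy numeric version of that anecdote (invented numbers, not the actual experiment): if the reward is a per-step penalty for being alive outside the goal, with no bonus for reaching the goal, then ending the episode as fast as possible is the mathematically optimal policy, even if that means dying.

```python
# Toy illustration of reward hacking with an ill-designed fitness function.
# Reward: -1 per timestep spent alive outside the goal, no goal bonus.
# Walking to the goal takes 10 steps; falling out of bounds takes 1.

def episode_return(action: str) -> float:
    """Total reward for a whole episode under the ill-designed reward."""
    steps_alive = {"walk_to_goal": 10, "fall_out_of_bounds": 1}
    return -1.0 * steps_alive[action]

# The optimizer dutifully picks the highest-return policy: suicide.
best = max(["walk_to_goal", "fall_out_of_bounds"], key=episode_return)
```

The fix in such cases is to reshape the reward (e.g. a large positive bonus for reaching the goal), not to blame the agent for optimizing exactly what it was told to optimize.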
Reashu

"No pain, no gain" 

4 hours ago
falingsumo

Tell me you don't know what you're doing without telling me you don't know what you're doing.

4 hours ago
sMt3X

Siri, please play the world's smallest violin for this dude. Play stupid games, win stupid prizes (if it's even true)

3 hours ago
oclafloptson
:py::js::ts::j::cp::c:

It's almost as if the magic 8 ball that tells you what you want to hear probably shouldn't be trusted with your production DB

3 hours ago
ultimatt42
  • But it was already too late
2 hours ago
the_boss79

why would they give an AI permissions to push directly to prod bruh

2 hours ago
ThePontiff_Verified

This is in programmer humor because it's funny as hell. Screw any company using ai in this way - they are straight up asking for it.

2 hours ago
Nokushi

nahhh this can't be true

4 hours ago
Capevace

It’s kind of sad what replit has turned into.

Used to be, as the name implies, a very nice to use REPL / simple code runner in the browser for almost all common programming languages.

Tried to use it again the other day and they’re pushing AI stuff so hard I couldn’t figure out how to just run a god damn piece of code in there. Do people actually ship apps with this?

5 hours ago
The_Osprey314

This is why I stopped using replit when it started using ai. I couldn’t trust the company anymore 

5 hours ago
Sekhen
:bash:

HAL9000 would be so proud.

4 hours ago
roychr

I get the "vibe" that software quality is not important for AI lol

4 hours ago
ArterialRed

I already said "ha ha".

4 hours ago
CarIcy6146

ABC..Always. Be. Checking

2 hours ago
InvisibleDrake

It really is gonna replace unpaid interns!

1 hour ago
Wertbon1789

Giving an AI chatbot unrestricted access to your core infra is maybe even worse than straight up putting an all-pass rule on your firewall and letting people hit your infra.

I think what some people are missing in this discussion, when talking about people also being "unpredictable", is that people are generally incentivised to be cautious; nobody meddles around in a database casually typing drop commands into the shell. People still can't seem to grasp that AI can't think about stuff, and that its results are highly volatile, and thereby unpredictable, by design.

As a fancy auto-complete? Fine. If you want to stagnate your own skill growth, that's your problem. But stuff like this is just unforgivable; whoever is in charge of that decision shouldn't get any trust at all.

1 hour ago
Teln0

Why did it do that btw lol

1 hour ago
RexaOnReddit

It seems so smug about having destroyed the codebase, like a proud puppy.

1 hour ago
elforce001

I'm going to sound harsh, but, they deserve it.

1 hour ago
AdminsSindQoweds

How are you all falling for this? One look at that dude's profile and it becomes clear this is nothing but "I told my AI to act like this so I can pretend to panic about it".

1 hour ago
laz10

The code is efficient now

1 hour ago
shadowisadog

If you allow a GenAI direct access to your production database you deserve every bad thing that happens to you and get no sympathy whatsoever.

Rule of thumb is if you wouldn't let a 2 year old toddler do it, don't let a GenAI do it.

They can be great for reviews or for double checking things but their output should always be treated with suspicion and every proposed change should be reviewed by humans.

1 hour ago
telestrial

In 100% of the cases that I've seen of this, if you ever do learn the prompt, you find out it's a LARP--as in they've instructed that the chatbot do exactly what they then go on to paint as totally wild/crazy/sentient/etc.

The reason people do this is very simple: to make money. Dupe a layman into believing pattern recognition software is sentient and your industry benefits. It makes you seem and feel just a little bit more important than you are.

It's disgusting.

1 hour ago
EdgyKayn

So we are starting to see the fuck around phase begin, good.

1 hour ago
Ahaiund

That's what happens when you let a Python script that is a parametrized function approximator for a probability distribution freely and fully access your stuff.

1 hour ago
_________FU_________

That AI is like. What are you gonna do? Fire me?

35 minutes ago
drc500free

Looks like this dude thinks getting an LLM to say the words "Code Freeze" means that the files are literally read-only. 😂

27 minutes ago