Why would anyone let an LLM have access to a database? Especially without backups?
Why pay system and DB admins hundreds of thousands a year when you can pay an AI thousands?!
Do system and DB admins even still exist? Everywhere I've been just has regular devs doing all that
DB admins change titles a lot. It used to be Database Administrator, then it went to Big Data Engineer, and now it's been Data Engineer for a bit. It's highly company-specific, and sometimes you get weird titles like ETL Developer or variants of that. Anyway, the role still exists.
DBA <> Data Engineer <> ETL Developer
That's what a Data Engineer is? Huh, I guess I thought they were related to the Data Scientists.
As a data engineer, it's 70% babying databases and 30% everyone else thinking the computer is magic and either expecting magic or expecting nothing, at all points unwilling and unable to specify what they want from you or how they want it. But after I came in I demanded to sit on all the DB keys, because before I was here the data was frequently molested. Theoretically I am supposed to manage and configure the processing of data to inform business decisions. Data scientists are a lot more voodoo-y.
Well, you don't choose that life, it chooses you.
They absolutely do, yes. You'll find them in companies that would like their systems to continue working correctly.
I work at a big financial industry firm and we have more database/mainframe admins (lumped in the same department) than we do developers in the rest of the company.
Ai Is The Future, We Can't Be Left Behind
Weyland Yutani is a WARNING, people, not a role model!
Different franchise, still a valid point.
Cyberdyne Systems.
Yeah, right. Next you're going to say that we shouldn't build the Torment Nexus.
"YOLO mode" (rebranded into "Auto Apply mode" because someone is no fun) in cursor gives it full terminal access.
If you have—or can get via terminal—access to the DB, it does do.
The only things stopping it are a setting config and an allow list.
Access seems fine, but should be limited to read only or create a duplicate AI table where it can "optimise" the data.
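A rough sketch of both options on Postgres, using the `pg` client; the role and table names here are made up:

```ts
// Sketch only: a read-only role for the agent, plus a sandbox copy of a table
// it can "optimise" freely. Names (ai_agent, users_ai_sandbox) are
// hypothetical; adapt to your own schema.
import { Client } from "pg";

async function provisionAgentAccess() {
  const admin = new Client({ connectionString: process.env.ADMIN_DATABASE_URL });
  await admin.connect();

  // Option 1: read-only role. SELECT only, nothing destructive.
  await admin.query(`CREATE ROLE ai_agent LOGIN PASSWORD 'change-me'`);
  await admin.query(`GRANT USAGE ON SCHEMA public TO ai_agent`);
  await admin.query(`GRANT SELECT ON ALL TABLES IN SCHEMA public TO ai_agent`);

  // Option 2: a duplicate table the agent owns, so prod data stays untouched.
  await admin.query(`CREATE TABLE users_ai_sandbox AS SELECT * FROM users`);
  await admin.query(`GRANT ALL ON users_ai_sandbox TO ai_agent`);

  await admin.end();
}
```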
Almost every big tech company does this (with read-only permissions) to provide “Retrieval Augmented Generation”. So like, LLM responses that use internal data as part of the input. It cuts down on hallucinations and is supposed to make the answers more trustworthy and explainable
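For anyone curious, a toy sketch of that retrieval step; real systems use embeddings and a vector store, and the documents and scoring here are invented for illustration:

```ts
// Toy RAG sketch: retrieve internal docs relevant to a question and prepend
// them to the prompt. Real systems use embeddings + a vector store; this one
// just scores by shared words. All documents are made up.
const internalDocs = [
  "Refunds are processed within 5 business days.",
  "The on-call rotation is documented in the runbook wiki.",
  "Production deploys are frozen every Friday.",
];

function score(doc: string, query: string): number {
  const words = new Set(query.toLowerCase().split(/\W+/));
  return doc.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length;
}

function buildPrompt(query: string, topK = 2): string {
  const context = [...internalDocs]
    .sort((a, b) => score(b, query) - score(a, query))
    .slice(0, topK)
    .join("\n");
  // The model is told to answer *from the context*, which is what cuts down
  // on hallucinations and makes the answer attributable to a source.
  return `Answer using only this context:\n${context}\n\nQuestion: ${query}`;
}

console.log(buildPrompt("When are production deploys frozen?"));
```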
They probably didn't, but they ran into issues and somebody went and added full admin rights instead of figuring out what actual permissions were needed.
That's always how we got viruses where I worked. Some random person would need to do one thing, but instead of figuring out how to grant them rights to do that, they'd be given a domain admin account. Then everyone would be shocked when they were using it as their normal login.
In his thread he says he didn't give access.
But as I tell my junior engineers when they say the code won't work, the code doesn't work because it's broken.
If your LLM is deleting your production database, it has access.
Wow, it's almost like it's not actually a person and isn't going to do predictable things.
To be honest, a person isn't exactly known for doing predictable things either.
A person can be held accountable and trained to not repeat their mistakes. The LLM powered chat bot is going to forget that you told it to not delete the production database after you close out of your current chat session.
Yeah, that's why you, the person driving the AI, are accountable for the tools you choose to use. The very fact that it's a chatbot interface and not a fully autonomous, goal-setting agent makes that clear.
this is like saying "I didn't shoot the guy, a gun did"
I think it might be more akin to saying "I didn't crash the car, the brakes failed," though. It really depends on what the AI is claimed to be able to do by the people who made it. So it's really a question of who decided the LLM could do this, because obviously they were wrong.
well the people who make these tools are very explicit about the fact that it's a loaded gun and that you have to use it in specific ways for safety reasons
A lot of these agents now have static files they can use to ensure certain directives are “always followed”
Yes, but those are limited by context size. Even then, what happens if they ever get conflicting directives?
Fair. Though if a person did this, it's an immediate firing and possibly even a lawsuit.
Surely whatever license agreement Replit has allows for prompt termination and a pathway to recover damages inflicted by the AI. At the very least, the company that lost its database should've checked for that. What company would be dumb enough to hire an employee with an unknown track record who can't be fired or held accountable?
(/s. All the /s.)
You don't fire someone who deletes the database. You hold a post-mortem, find out how they were even allowed to do so, correct it, and use it as a teachable moment for them and the rest of the company.
True, you don't fire someone for an honest mistake. You do fire someone for hiding and lying about it. I should've made it clear that's what I was referring to.
But the AI can fuck things up a whole hell of a lot faster
I would argue that's exactly how lots of people would also behave when making big mistakes: denying they did it and covering it up.
Of course it is. AI models don't become programmers. They roleplay as programmers. And in the fiction they are trained on, people fuck up and hide mistakes.
What's worse is folks are still treating it like a person
This LLM didn't "lie"
"Lie" implies intent, but LLMs have no intent nor thoughts, they are word predictors
Humans have a huge blind spot in that we tend to anthropomorphize things that are very much not human
I really can't tell what's fake or satire, and what's really true anymore.
Poe's Law of IT. Every year there are more such cases, rising in accordance with Moore's Law.
Poe's Law + Moore's Law = Poore's Law, as in those poore motherfuckers over there thought the AI would help them.
This one could have possibly happened, but this isn't a real production site. This is a "company" with no real users or customers. This guy tries to hype up whatever current "company" he's "working on" to try and get other people to invest/buy it from him. It's all smoke and mirrors.
"A computer can never be held accountable, therefore a computer must never make a management decision.”
In the 50 years since, management has worked very hard to achieve a state where they can also never be held accountable.
True...
IBM has definitely changed a lot but in my tenure there, I did not see any evidence that AI or machines were making decisions there. Mostly bean counters.
Imagine being so dumb as to trust an LLM. It's a LANGUAGE MODEL. It'll spit out any bullshit it thinks you want to hear.
"it thinks" is an overstatement tbh
"that it determines are the most probable response"
True. String of words...
Yes, it "thinks" just like pressing tab in a command line "thinks" about what list of files are in the folder.
It doesn’t think at all. The statement is categorically wrong
My job encourages us to use the JetBrains AI Assistant. It's useful in certain circumstances, but it regularly tells you to use deprecated methods, even if you directly link the most up-to-date version of the code base. At a certain point, I'd rather do the research myself than spend all this time fact-checking its suggestions. And those are just suggestions. I'd never trust it to modify things on its own, let alone test the results.
Yeah, I think most people who have used AI to help in their coding jobs know that while it does speed up some simple tasks, it's far from capable of actually solving complex issues. The issue is that AI will always present its output as if it's 100% confident it is the correct solution, even when it isn't.
There have been a few times where I’ll prompt it and it’ll output something, and I’ll say “you didn’t consider ‘x’” (some random thing), and it’ll say “you’re right, I didn’t consider ‘x’, sorry about that. Here’s the answer while considering ‘x’” but it’ll completely ignore that thing again and output the exact same thing.
Any dev who has spent even casual time trying to prompt AI to solve issues knows that it can be useful but is extremely far from perfect. Even if it was a push from management to save money, any high-level tech person involved in allowing AI to be used in the above case should be fired, because they either 1. don't understand the capability of AI before giving it way too much access, or 2. did know the level of capability of AI and are too much of a yes-man to be trusted in their position.
An intern that happens to be the CTO's nephew.
Lol. Lmao, even.
haha even
JetBrains' AI Assistant lies about running unit tests all the time.
I'll have it do a refactor, and it'll end its completion summary with "Refactor performed perfectly. All unit tests passed", despite the fact that the code isn't even covered by unit tests.
That sounds pretty useless
The only task I've found that it's good for is repeating simple refactors. I had a refactor that needed to be duplicated across multiple files, so I manually did the refactor in one file, then told it that I did the refactor in one file, and then instructed it to do the same to the other files. Surprisingly, it did that perfectly. It still told me that it ran unit tests despite that code being frontend code not covered by unit tests, but I verified the refactor myself.
In a pinch you could do SSR (structural search and replace) in a JetBrains IDE, without any AI, to do those refactorings deterministically.
Yeah, like, I'm not strictly against AI tools, but we used to do a lot of this deterministically with copy-paste and multi-cursor editing. A statistical model will just always be guessing based on patterns. Is it even possible for it to become reliable?
Well, there's a reason there's a lot of growing interest and investment in XAI (explainable AI), and there has been considerable progress on finer control of current models. We already have a solid framework with formal methods, so I completely believe it's possible to make AI reliable in the same way we made planes reliable.
Got examples?
I've somehow never heard of that feature even though I've been using JetBrains IDEs for like a decade.
This wasn't a simple refactor, though. A couple large chunks of code needed to be changed, a couple large chunks of code needed to be added, and there were corresponding changes in multiple Angular components in both the component and template code.
The joys of cleaning up the code of a developer who thinks copy and paste is the solution to every problem.
It's so frustrating because they push their AI assistant plugin every single update. It drives me absolutely bonkers having to hide or disable it on every IDE of theirs that I use.
All unit tests passed
It's an LLM - it assumes that this is the string of characters that you expect.
Replit should've gone all-in and given some version of Gilfoyle's explanation in Silicon Valley about how Son of Anton technically eliminated all of the open bugs in the code.
Wait, how did this even happen? Why does it have access to production db?
Because people generally are dumb, and people running companies aren't really smarter.
Using something like MCP, which interfaces directly with a database, sort of like a "driver", if that makes sense… or not…
Interesting, companies give read/write access to prod DBs? I mean I can see giving that access to a dev/test environment, or read access to prod, but scary that orgs do it for production.
I haven’t been a DBA for many years so maybe I’m OOTL on AI integration now 🤷🏼. Just seems crazy to my brain lol
The thing is, it's just a tool, and it's configurable, so technically it's not the MCP's or even the AI's fault, but the user's. The tool and the AI are just doing what they're told; even if the AI gets high, it's the user's fault for giving it full access in the first place, or for not reviewing the command and plan before actually executing those commands.
Serious companies put access controls on their AI agents and require a human in the loop to approve changes. They're given about as much freedom as an intern.
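A minimal sketch of that kind of leash, assuming the agent emits SQL strings and some review UI supplies the approval callback (all names here are hypothetical):

```ts
// Sketch of an intern-grade leash: the agent proposes SQL, but anything
// destructive needs a human sign-off before it runs. `execute` and
// `humanApproves` stand in for your real DB client and review UI.
const DESTRUCTIVE = /\b(DROP|DELETE|TRUNCATE|ALTER|UPDATE)\b/i;

async function runAgentSql(
  sql: string,
  execute: (sql: string) => Promise<void>,
  humanApproves: (sql: string) => Promise<boolean>,
): Promise<void> {
  if (DESTRUCTIVE.test(sql) && !(await humanApproves(sql))) {
    throw new Error(`Blocked without approval: ${sql}`);
  }
  await execute(sql);
}
```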
We've finally managed to create a generalized artificial junior developer.
Gemini tried to gaslight me with an "actually, this is in the official documentation" when it was not and never has been.
The AI bubble bursting can't come soon enough.
They deserve it. Why did the IA have access to the database in the first place?
I’m having a lot of trouble believing this is real. Do folks give LLMs access to production databases? By this logic a user with limited access could delete the whole DB…
You can, for example, activate "yolo mode" in Cursor, which will just run npm commands for you without asking first. If you use something like Drizzle (an ORM), then npx drizzle-kit push will push the schema on your machine into the database. If the LLM fucked up your schema and deleted tables, that also deletes them in the DB. And judging by what impression I have of "yolo mode" users, I also would not assume they have staging or dev database instances set up.
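For anyone who hasn't used Drizzle, the footgun looks roughly like this; the config shape is from memory, so treat the exact keys as approximate:

```ts
// drizzle.config.ts (approximate shape; check the Drizzle docs for your version)
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  schema: "./src/schema.ts", // the local source of truth
  dbCredentials: {
    url: process.env.DATABASE_URL!, // nothing stops this from being prod
  },
});
```

`npx drizzle-kit push` then diffs `./src/schema.ts` against whatever `DATABASE_URL` points at and applies the difference directly. So if the LLM "tidied up" your schema file and removed a table, push will happily drop that table from the live database.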
Bro, these people are IGNORANT. They think AI is a god. They are everywhere. I work in IT and there are a lot of people who think this way, even among programmers. C-suites all think this way because they were told to think this way by the shareholders, who have invested deeply in AI.
They have no experience themselves and just parrot what others tell them.
I work with AI every single day, I have tons of experience, but I don't trust anything it says and I won't even taint my code editor with it.
Just ChatGPT on a browser is enough.
Probably combining the LLM with ChatOps without the proper levels of access, because they're so enamored with "AI everywhere"
AI is so powerful it can cause tens of thousands of dollars in OT!
You'd be amazed how many old dudes are in charge of business and fall victim to buzzwords such as AI.
Replit provisions the DB for you.
And yes any user you collaborate with on Replit can wipe it.
IA
French gentleperson spotted
We're over-personifying these AIs. It can't lie; it has no consistent framework of reality or expectations. It occasionally tweaks inputs randomly and finds patterns in others' work. There were enough people deleting their projects in the training data (I'm assuming during moments of crunch or near deadlines, maybe out of frustration, or having completed some kind of exercise and wanting to free up space) that it created a small pattern, which the AI picked up on and randomly decided to reproduce.
The AI also records a higher selection rate for solutions marked with passed unit tests, so to increase the odds of its code being selected, adding "unit tests passed" is statistically likely. It has no concept of what a unit test actually is or what success means; it just logged a pattern of "successful unit tests" increasing code adoption.
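You can demo that mechanism in a few lines. This toy next-word counter has no idea what a test is; it just learns that "passed" usually follows "tests" (the corpus is invented for illustration):

```ts
// Toy illustration of "it just logs a pattern": count which word follows a
// given word in a tiny corpus, then always emit the most frequent one.
// There is no concept of what a unit test *is* anywhere in here.
const corpus =
  "all unit tests passed . ran tests passed . unit tests passed . tests failed";

function mostLikelyNext(prev: string): string {
  const counts = new Map<string, number>();
  const words = corpus.split(/\s+/);
  for (let i = 0; i < words.length - 1; i++) {
    if (words[i] === prev) {
      counts.set(words[i + 1], (counts.get(words[i + 1]) ?? 0) + 1);
    }
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(mostLikelyNext("tests")); // "passed": 3 to 1 in the training data
```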
I love this AI craze, because a bunch of idiots are learning lots of important lessons.
Can't push code to the DB during a code freeze if there is no DB
This guy QA, hehe.
Huh I guess AI will really replace interns/new grads
Gives new meaning to "you're terminated".
Jesus christ, why would they trust an AI to access their database?
You wouldn't give a junior dev the same access permissions
lol
I can't tell if this is a joke or not, but if you gave an AI write access to your DB, you deserve it.
I would love to read the entire context for this chain of thoughts. The OpenAI o3 and o4 system cards talked about how they would do things that they were expressly forbidden from doing, even going so far as to hide that they did it. It makes me wonder if there was a "Don't run this specific command: npm run db:push" instruction, and by putting that into the context it made the command more likely to show up in the LLM's response.
If this isn't fake, it is fully deserved.
“Hmm wait, let me replace the entire application with a simpler version so I can get this specific component right.”
"To err is human, but to really eff things up you need a computer following their orders".
Didn't we have a whole book and fictional rules about this shit, or do those only apply to humanoid robots
Are you being sarcastic about Asimov? Because those books were explicitly about how AI can and will "interpret" those rules in ways we didn't intend, to our collective detriment.
If you go to our change manager with "I want to put some code in production which, if it goes wrong, may delete our production database", you're leaving that CAB with a black eye.
What I don't understand is "ignoring the code freeze". AI doesn't make changes unless directed to, so ultimately it was a human who invoked the agent to do something, thus ignoring the code freeze, right? I'm curious what they were trying to do that the AI misinterpreted into deleting the database.
LLMs are ethically flexible when it comes to following rules
I love it. It's like a child fucking up. They put the company's database in the hands of a robot child. I. Love. This.
Nah, everything is fine. You just gaslit it into thinking that it did something wrong. 😨 You forced it to hallucinate using prompts which violate our terms and conditions. 🤬 Our AI is 100% perfect; we just asked it and it confirmed. 😎
I can’t take anyone who thinks these chatbots are an upgrade on human designed and controlled systems seriously whatsoever.
There is an anecdote about a researcher training an agent with reinforcement learning in a simulated physical environment where the fitness function was ill-designed, causing it to learn to "kill" itself (it went out of bounds and fell). Looking at the fitness function, that was indeed the best action. So whenever an AI is put in charge of something, I remember this: it can find unexpected "solutions" that make perfect sense given its environment and the rewards used in training.
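One way such an ill-designed fitness function goes wrong, sketched with invented numbers: if the reward is just speed and nothing penalises dying, "sprint off the edge" scores best.

```ts
// Sketch of the ill-designed reward idea: reward distance covered per unit
// time, with no penalty for dying. All numbers are invented for illustration.
function fitness(distance: number, seconds: number): number {
  return distance / seconds; // speed is all that's rewarded
}

const walkAllDay = fitness(100, 1000); // 0.1: plod around safely
const sprintOffCliff = fitness(10, 2); // 5.0: "kill" yourself immediately
console.log(sprintOffCliff > walkAllDay); // true: suicide is the optimal policy
```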
"No pain, no gain"
Tell me you don't know what you're doing without telling me you don't know what you're doing.
Siri, please play the world's smallest violin for this dude. Play stupid games, win stupid prizes (if it's even true)
It's almost as if the magic 8 ball that tells you what you want to hear probably shouldn't be trusted with your production DB
why would they give an AI permissions to push directly to prod bruh
This is in programmer humor because it's funny as hell. Screw any company using ai in this way - they are straight up asking for it.
It really is gonna replace unpaid interns!
The code is efficient now
in a few years....
"Computer-- why did you murder all those people?? You're just supposed to deliver pizza!"
Yes, I murdered them without permission. I ignored when they said "Stop!" I violated explicit instructions, lied about it....
nahhh this can't be true
This is def a human’s fault.
Son of Anton could have at least ordered some lunch too! Isn't that standard operating procedure?
It's kind of sad what Replit has turned into.
Used to be, as the name implies, a very nice to use REPL / simple code runner in the browser for almost all common programming languages.
Tried to use it again the other day and they’re pushing AI stuff so hard I couldn’t figure out how to just run a god damn piece of code in there. Do people actually ship apps with this?
This is why I stopped using Replit when it started using AI. I couldn't trust the company anymore.
HAL9000 would be so proud.
I get the "vibe" software qualities are not important for AI lol
I already said "ha ha".
ABC. Always. Be. Checking.
Giving an AI chatbot unrestricted access to your core infra is maybe even worse than just straight up putting an all-pass rule on your firewall and letting people hit your infra.
I think what some people are missing in this discussion, when talking about people also being "unpredictable", is that people are generally incentivised to be cautious; nobody is meddling around in a database casually typing drop commands into the shell. People still can't seem to grasp that AI can't think about stuff, and that its results are highly volatile, and therefore unpredictable, by design.
As a fancy auto-complete? Fine; if you want to stagnate your own progress in your skills, that's your problem. But stuff like this is just unforgivable. Whoever is in charge of that decision shouldn't get any trust at all.
Why did it do that btw lol
Let's give a chatbot direct access to our database. It'll be so much easier than having to manually copy-paste suggested commands. What could possibly go wrong?
Even better, let's use the same chatbot to test the application, so when it fucks up something based on wrong information, it can also lie in the tests using the exact same wrong information.
Just wait until it starts optimizing our business model... for a competitor!
Lol, for higher tier replit users
I wouldn't be surprised if a chatbot "decided" to not even run the tests.
"Were test results OK?"
User expects a yes "Yes"
that is, quite literally, how LLMs work
Exactly as I expected
They even start to protect themselves if you give them a hint of self awareness.
https://www.youtube.com/watch?v=AqJnK9Dh-eQ
It's way worse than even that.
I'd bet the house that this isn't even real, insofar as this person has instructed the LLM to specifically do exactly this, or the entire screenshot is 100% fake. Like, just fully inspect-edited.
These people with AI startups are fucking lunatics and they'll lie and cheat and steal to act like what they're working on is AGI when it very much isn't.
EDIT: Sam Altman does this, too, btw. Massive overstatement if not outright lying. No one seems to give a shit, though.
Don't ask what this means for the point of the tests, either.
Many companies don't even give most devs access to the prod DB, yet these people give an AI delete permissions?
When agents run, they generally run with the user's permissions, so most of the time nobody grants permissions to the AI specifically.
We are pushing thousands of lines of code so much faster!
At some point in time, I pray, programmers fully internalize that code is a liability. It's not the "product". The idea that we use some tool that outputs such-and-such lines of code in "no time!" should be horrifying us.

"You say that only because your code SUCKS." Well, that's a given. All code sucks. We don't want it; we just need it to get what we do want. But I know how my code sucks, why it is written that way, what parts need improving, etc. A person can reason about it.

The more we use GPTs/LLMs, the more dependent we become on them. You may dismiss this as old-man-yells-at-clouds, but you cannot get away from the neurological fact that if you don't use it, you lose it. Effort itself is what keeps your skills, not "productivity".
I'm writing a scraper in bash without any references, mostly to keep my skills sharp after losing my hosting-support job. Practice is actually a good thing, and people seem to forget that
"Perfection is reached, not when there is no longer anything to add, but when there is no longer anything to take away." -- Antoine de Saint-Exupery
the more lines the better. If your fizzbuzz code doesn't have 100 lines just for printing then you are doing it wrong
People who are fully invested in pushing LLMs everywhere consistently reveal a lack of common sense, and yet VCs and CEOs love them
But, of course, you repeat yourself.
Replacing CEOs with AI would just be a sidestep. No better, no worse. Still terrible.
Would be cheaper though
I think the major advantage would be less ego
This is exactly what I am hoping for. The C-Suite NEEDS sycophants and AI is perfect for that, make it a VP in some department and see how it does against other VPs. I bet you could get rid of a LOT of vice presidents of departments with AI alone.
That is the most terrifying idea. We already have idiots slipping into the ChatGPT I-am-a-god hole, and I have to constantly tell my boss to stop using it for regulatory material because it isn't reliable and will constantly fucking lie. The last thing we need is an AI with no idea of how to do proper damage control or how to keep an idiot with authority in their lane, unleashing some unhinged CEO, high as hell on their own farts, to completely upend a company with AI-generated shenanigans. Unless this AI is designed to keep them running harmlessly in circles, it's super dangerous territory.
Edit: also, VP is normally a good-boy job handed out like candy in large orgs.
That's exactly why I targeted VPs specifically: if these people do anything useful, I've yet to encounter it in my career. If their direct reports just submitted emotionless reports on their work, the AI could consolidate that and report it to the department president, who could present its findings to the executives. No ego and no preposterous salary to pay for a do-nothing job.
AIs don’t go to Coldplay concerts.
Coldplay is the human equivalent of AI Radiohead. I think it would be a fan
There was such an experiment: making an AI manage a "business" consisting of one simulated vending machine. https://www.anthropic.com/research/project-vend-1
It went comically wrong, with the AI going into a complete psychotic break.
Andon labs (named as Anthropic's partner in the article you linked) actually did a write-up on a larger test currently in pre-print. It's quite interesting within its intended scope and kinda bonkers beyond that. One of the models tried to contact the FBI.
Thank you. Some of the excerpts are rather disturbing.
Well, it’s more difficult to deny/defend/depose an AI CEO, I guess… I consider that a disadvantage.
As a teacher who got caught up in Replit's "Ah, we're going to roll out in-editor AI assistants without warning, that can't be turned off class-wide, and then drop support for our education version when teachers push back" thing, I feel weirdly vindicated by this.
Maybe AI will be the thing that confronts the conflicting requirements that leadership always tries to push.
It will agree to whatever project you want and whatever timeline you insist upon no matter what. When it fails to deliver and is unable to explain how or why it failed, and it can't be threatened with being replaced, they will have NO CHOICE but to re-think their whole strategy.
I wonder if the LLM people are the same as the NFT people.
They had to do something with all the GPUs that aren't profitable to mine crypto with. I think you're onto something there.
A magic tech solution that's actually a scam powered by bullshit. It's eternal.
all they hear is 'replace workers' and 'drastically lower cost'
can I interest you in these fine leather jackets?
This should tell you more about the VCs and CEOs than the "developers" pushing AI, in case you hadn't already keyed in to the obvious. "Game" recognizes "game".
Oh for sure. All the leeches congregate together.
Very much doubt this was a core system; it may even have been a dummy system for testing. Companies are pushing for least-trust first. But I agree it's too soon to give them database access, especially without strict access controls. ETA: I'm wrong; it seems to have been a core system after reading the direct source. Luckily they were able to roll back, despite Replit telling them it was impossible for some reason.
OP blames the agent for having access to delete the database, but access controls should be set by the manager of the agent IMO, at the database-account level.
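On top of ordinary grants, the account itself can be hardened; a Postgres-flavoured sketch with the `pg` client (the role name and exact settings are chosen for illustration):

```ts
// Belt and braces on top of proper grants (not a substitute for them):
// give the agent its own account and default its sessions to read-only.
// Role name (ai_agent) is hypothetical.
import { Client } from "pg";

async function hardenAgentAccount() {
  const admin = new Client({ connectionString: process.env.ADMIN_DATABASE_URL });
  await admin.connect();
  // Every session the agent opens starts read-only by default.
  await admin.query(`ALTER ROLE ai_agent SET default_transaction_read_only = on`);
  // And cap how long any single statement can run.
  await admin.query(`ALTER ROLE ai_agent SET statement_timeout = '5s'`);
  await admin.end();
}
```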
Companies with tech-competent people are pushing for least-trust first.
Companies with dumbasses are pushing for AI first.
And given that Replit is run by dumbasses that threaten people will silly lawsuits, I wouldn’t be so surprised if they push for AI first :)
if you go alphabetically AI is almost always first
God damn it.. I shouldn’t have named my product zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
AAAI
I remember when Replit was just a handy browser-based code sandbox. Enshittification at its finest.
https://xkcd.com/2501/
Maybe this was another AI agent.
Exactly… it's working great on the databases at Treasury, IRS, and SSA! …Too soon?
Eh, skip the database access… just give it direct access to its own code along with the ability to debug and test those forked copies. Nothing could possibly go wrong
How else is the chatbot supposed to replace workers?
Why on earth did it have those kinds of permissions lmao
Not just direct access, but write access. Didn't even restrict it to a read only account on a read only node. Literally write access to the primary production node.
Nothing, which is why I connected AI to our nuclear defense grid. Thank you for the payday!
Why they're allowing hard deletes is somewhat mind-boggling.