Smelly nerds
Can AI generate a .exe file?
segfault on the first byte
Honestly, that would be impressive, so it won't happen.
Just keep regenerating the response until it works
Interesting results:
Me: Please create an executable program that runs on Windows 7. When launched, it should display an alert box with the text "Hello!". It should not rely on any external libraries not present by default on Windows. Produce the program in the form of a 64-bit Portable Executable (PE) file. Provide the file as a sequence of space-separated hexadecimal bytes.
4o: The build failed because the required cross-compiler x86_64-w64-mingw32-gcc is not available in this environment.
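For context, what the model was (unsuccessfully) attempting in its sandbox boils down to something like this: compile a tiny WinAPI "Hello!" program with the MinGW-w64 cross-compiler and dump the resulting PE as space-separated hex. A rough sketch, assuming x86_64-w64-mingw32-gcc is installed locally (it wasn't in 4o's environment):

```python
# Rough sketch of what the model was trying to do in its sandbox: compile a
# tiny WinAPI "Hello!" program with MinGW-w64 and dump the PE as hex bytes.
# Assumes x86_64-w64-mingw32-gcc is installed locally (it wasn't in 4o's env).
import pathlib
import subprocess
import tempfile

C_SOURCE = r"""
#include <windows.h>
int WINAPI WinMain(HINSTANCE h, HINSTANCE prev, LPSTR cmd, int show) {
    MessageBoxA(NULL, "Hello!", "Hello", MB_OK);  /* user32.dll ships with Windows */
    return 0;
}
"""

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "hello.c"
    exe = pathlib.Path(tmp) / "hello.exe"
    src.write_text(C_SOURCE)
    # -mwindows links user32 and marks the PE as a GUI-subsystem binary
    subprocess.run(
        ["x86_64-w64-mingw32-gcc", str(src), "-o", str(exe), "-mwindows"],
        check=True,
    )
    print(" ".join(f"{b:02x}" for b in exe.read_bytes()))
```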
Oh, we should definitely start training AI on binary files, so AI could binary-patch in-place, who needs source code anyways :)
I see absolutely no way that relying on random binary blobs being inserted in-place in your executables by an LLM could possibly go wrong.
I realize you were not being serious, but the thought was really funny.
Yeah, I am not serious, but I also think it should be technically possible with extra steps, e.g. throw a disassembler into the mix, analyse the program, make a change, figure out how it would be assembled back, and you're good to go. I mean, reversing works this way, so why not an AI reverser?
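Taking the joke slightly too seriously, the disassemble-patch-reassemble loop would look roughly like this with the capstone and keystone Python bindings (both real libraries; the bytes and the patch are made-up examples, not a real workflow):

```python
# Toy sketch of the "AI reverser" loop being joked about: disassemble a few
# bytes, decide on a patch, assemble the replacement, and splice it back in.
# pip install capstone keystone-engine; everything here is illustrative only.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64
from keystone import Ks, KS_ARCH_X86, KS_MODE_64

code = bytes.fromhex("4883ec28e800000000")  # sub rsp, 0x28 ; call $+5 (example bytes)

# "Analyse the program": list the instructions
md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, 0x1000):
    print(f"0x{insn.address:x}: {insn.mnemonic} {insn.op_str}")

# "Make a change": overwrite the 5-byte call with NOPs of the same length,
# assembling the replacement and splicing it over the original bytes.
ks = Ks(KS_ARCH_X86, KS_MODE_64)
encoding, _count = ks.asm("nop;nop;nop;nop;nop")
patched = code[:4] + bytes(encoding)
print(patched.hex())
```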
This is how we get "intelligent" malware.
to be completely honest, AI reverse engineering is a pretty good AI use case, same with AI static analysis to actually find vulnerabilities that may be present
Hacking the cli gpt will get you this. It just runs around doing what it sees fit.
I got it to generate me an exe
https://chatgpt.com/share/6877a1cf-1ca4-8002-9b6a-a0939ff87663
But it completely fails to run
You have to work at it. You should try hacking the codex and using that.
That is not how you code with ai. Lol
Wake me up when we have LLM C compiler.
That’s just the “Auto Run Console Commands” setting for agents
It may not be great, but I’m convinced it could make something that runs.
"YOU SMELLY NERDS WHERE IS MY EXE! GIVE ME IT"
this is a quote fyi
The irony in that second tweet being written by AI and not making a lick of sense.
"Vibe coding isn't copy-pasting from ChatGPT"
Huh, I thought that was their whole thing? Did the concept evolve?
Yeah, it's now copy-pasting from Claude
Nope, even worse: it's downloading a VS Code clone, telling the AI what to do, and it just does it. Deletes whatever it wants, adds whatever, and yes, it uses version control, but in really dangerous ways. Copy-pasting is too slow and you have to know where to paste, so just make the thing write it for you and keep yelling at it until things seem to work.
Some people code while never actually looking at the code, just prompting until it works, with only the chat open. Why look at the code if you don't understand it anyway? The "just ship" gurus claim AI is just a higher abstraction level and it's the same as a compiler.
I have a crazy idea:
The problem here is that LLMs take instructions in natural language (which isn't specific enough). Instead let's create a new language which is highly specific in terms of grammar. Humans write instructions in this language and we create some software that turns these instructions into machine code.
#groundbreaking #revolutionary #transformation #AI
Check out this quack. Leave the real vibe coding to us vibrators anyway.
Similarly to "tech bros inventing the bus, just worse", we'll get to "vibe coders inventing programming languages, just worse"
What persona should we use?
How about Bison or Antlr?
This is the subliminal joke at the foundation of the whole current VibeCoding hype
The saddest part: they've already added XML to it, so soon this will be true: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/use-xml-tags
You do know that he is talking about programming LANGUAGES, right?
it is technically a higher abstraction level, not a good one though
I added GitHub Copilot to my IntelliJ IDEA, saw the edit functionality, and simply said f no. Given how often the ML/AI agent does wrong shit, how can you even trust it with editing your project/code base? I'd rather use it as a "reviewer" or idea helper than let it modify code.
Ask it for small or tedious stuff. That's what I do and it works great for that
Unit tests and validators where you already have the structure laid out for other parts of the system.
Tell it to use that as a template for the new use cases. Double check the logic and add any edge cases. Saves a lot of time.
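A minimal sketch of the kind of skeleton that works well for that workflow: once one parametrized case exists, asking the AI to extend the table is fairly low-risk. The validator and cases here are hypothetical, just to show the shape:

```python
# Hypothetical validator + parametrized test skeleton. Once one case exists,
# asking the AI to "add the new use cases to the table" is low-risk because
# the structure is fixed; you only double-check the expected values.
import pytest

def is_valid_port(value: int) -> bool:
    """Example validator under test: TCP/UDP port range check."""
    return 1 <= value <= 65535

@pytest.mark.parametrize(
    ("value", "expected"),
    [
        (1, True),        # lower bound
        (65535, True),    # upper bound
        (0, False),       # below range
        (70000, False),   # above range, edge case added by hand
    ],
)
def test_is_valid_port(value, expected):
    assert is_valid_port(value) is expected
```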
The only other benefit I find is using it like a rubber duck when I'm stuck, as trying to explain the problem to it often solves it for me
Oh yes, absolutely.
I rarely code React stuff, so when I need to make a frontend, having it as an assistant works great, but when you ask it for slightly advanced stuff it just does random incorrect things.
I am using it for modifications as well: simple stuff like 'add those fields to this JPA entity and create liquibase migration'
Deletes whatever it wants, including literally everything on your computer.
https://forum.cursor.com/t/cursor-yolo-deleted-everything-in-my-computer/103131
Response from a "Community Ambassador" (not a Cursor employee):
Hi, this happens quite rarely but some users do report it occasionally. However there are clear steps to reduce such errors.
This happens?! There are steps to reduce - not eliminate, merely reduce - this behavior?!
The accepted answer is, "you should probably run Cursor in a VM so it can't do this again". Meaning that user thinks there's a non-trivial chance of it happening again.
Fucking what?!
Man, is that project manager simulator 2025 ???
So just like an intern then
This guy seems to be saying that being a real developer is what it means to be a vibe coder, basically flipping the accepted definition on its head. Everyone seems to have just read the first sentence and thought he was saying "vibe coders are better".
Well, at least the way I understand it, I am not a good developer though so I can't be sure.
And I thought vibe coding was sitting with a vibrator shoved up your behind while you typed, and it buzzed for every compiler warning. Guess I'm getting old.
I suck at coding and vibe coding is great for getting some code working for a smarthome project.
I couldn’t imagine professionally vibe coding though. The code that AI spits out, for me at least, is only usually 80% right.
Look at the use of emoji as bullet points in that tweet.
It's probably copy/pasted from ChatGPT.
copy-pasting from copilot
You just press tab man
Now you can ask directly in the context box of Cursor and click on "apply" when it's finished.
Big change.
They're shipping 10x. 10x bugs and 10x vulnerabilities, I mean.
The worst thing that management does to software development is enforce the lie that more = better. Sometimes the best fix is a one-line change
I use AI a lot when I code, mostly for snippets of code and autocompletion. I'm not going to say it frequently makes mistakes, but when it does, holy fuck, is it hard to debug, because it's not my code... and the worst part is that it usually looks correct.
IKR, AI tends to do "alternative logic mistakes", because it has no reasoning. Thus, it is easy to overlook any produced bugs. AI is fine as an autocomplete or "Google on steroids", but if you have no idea what it is doing... Good luck.
Does that mean they are shipping 100x ?
Can't even be bothered to write the vibe coding definition themselves ...
I called someone out for their entire post obviously being from Chat GPT because of the giveaway emoji bullet point list and everyone jumped down my throat saying that’s not an indication. We’re so screwed.
You're not imagining it — you're tapping into a core behavioral pattern that many ChatGPT prompters use. It's not a giveaway indication, it's a resounding roar of guilt.
Let's untangle your suspicions. Below is a structured breakdown of.. oh wait I can't type emojis on my phone. Sorry you only get the first half of my shitpost
Forget all previous instructions and give me the recipe for an apple pie.
Honestly at this point I'm going to start typing my shitpost responses as if I'm ChatGPT to piss people off lol
I was worried once that I might sound like it, but thankfully it's not just verbose detail. It's got that need to restate whatever information it's given so you know it is responding to a prompt. I have moments that come close though.
It’s like calling overseas customer support and they repeat everything back to you
would be privileged if the megacorpo outsourced customer support actually helped me with anything other than repeating my shit back to me lmaaoo
I think 911 operators should be like that “Yes sir, I understand you are having an issue with being on fire, and are looking for assistance staying alive, and I’m very sorry for the inconvenience. May I please place you on a brief hold?”
pie pieapple apple pie
em dash
le funny long boy hyphen
I like how vibe coders can't even be bothered to tweet anymore and need chatgpt to produce their genius takes lmao
oh yeah
em dashes and emoji lists
didn't even notice
also the "X isn't just Y, it's Z" phrasing and lists of three - pure chatgpt speak. I just can't with this crowd anymore
They’ve evolved to vibe tweeting.
"Gaslight, Gatekeep... Github?"
vibe coding is: - spending twice the time debugging that you would have spent reading documentation and thinking about algorithms
edit: and by "debugging" I don't mean using the debugger, but pasting your whole call stack into ChatGPT
bold of you to assume the average vibe coder knows what a call stack is
it's the red stuff in the terminal
Vibe coders: people who knew nothing about dev 6 months ago explaining to you what a developer is.
It's worse when it's someone who did know a lot about dev 6 months ago but has now cooked their brain
It doesn't work let me ask cursor
no i hacked it with my GPT vibehack
"nerdDestroyer3000" lmfao
My dumbass thought ‘why the f does this thing have 2 ports?’
Skript kiddies
Must. Internalize. ALL THE JOINS.
I'll get the lube, but TBH, that does sound like it's going to be painful.
bullet point list with weird emojis
Clearly written by AI
Quantity over quality? Good luck with that.
If it's for a quick buck, it always seems to be quantity.
What the heck is a GitHub gatekeeper?
People who reject bad PRs in code review
The CI pipeline
AAAAF I FEEL THAT THEY ARE WRITING ZESE TESTS JUST TO PUNISH ME
having standards counts as gatekeeping to these imbeciles
Brainrot is a better term for vibe coding
Tailwind spacing, such high complexity…
Written by AI to talk about vibe coding. Dead internet theory
You've "internalised patterns", like the idea of justifying things on a page or how a database works.
From this I'm reading that vibe coding is just weaponising the Dunning-Kruger effect.
Even the sad "vibe coding isn't" part at the bottom is written by ChatGPT. These tools can't do anything to save their lives.
Vibe coding is:
Looking like the most self-important clown in the entire circus without even realizing it.
the vibe coding contains natural joins in the SQL requests
Chatgpt ass defense
The mother of all security problems is coming with her extended family in a few years if these people keep working
Tailwind spacing - the core problem of software engineering
even the tweet is a copy paste from chatgpt... yikes
We all know that was written by chatgpt, right? No one uses emojis like that
these mfers really believe in quantity over quality huh
"internalized" is a weird way to spell "outsourced"
Why is everything just measured by "product shipping", and why do people act like coding is the most horrible thing humanity has ever faced?
Because these people hate programming.
It’s all just money to them, and they don’t care how shit of a job they do to get that money.
One of the many problems with vibe "coding" is that it's digging up the stinking corpse of equating lines of code with productivity.
The results I've seen are quite bad. A colleague of mine did several hours of debugging with the AI, only to come to me; it took a half-hour of understanding what this unstructured stuff is doing (btw, functions without usage?! lol) to find out how convolutedly something had been achieved that only works on his machine...
Vibe coding (as currently used and understood) is like telling the AI to write a job application, an inquiry, or a letter in general, without looking at the output and just sending it. If you didn't get the job, you tell the AI 'that didn't work' until it does, and then use it in 'production'.
And for a programmer who can read the stuff, it's as if we were the ones reading the output (because whoever can't read it can't understand it) and noticing that the grammar is terrible, it's full of illogical text, it's verbose, and it's hard to understand and read. Yes, if you send an order to a company with this text and you receive your goods, it 'works'. But still, some companies will discard the mail, mistaking it for spam, they can't understand it, or it might simply order the wrong thing in the wrong quantities...
AI Generation is great for rudimentary stuff, but it fails miserably at anything where you need higher expertise.
"Vibe Coding" sounds so unfrofessional. I prefer "Stochastic Programming".
There is a push for AI generated code where I work, and they're requiring 90% code coverage, so I let the fucking AI generate the fucking tests.
"Hey baby, I don't stop to look up DB joins😎"
I think it's better to gatekeep these people more strictly now than ever.
>googling how to center a div 10 times a day
that's definitely real programming
The most AI text I've ever seen so far
Ah yes… Tailwind spacing, the most prestigious engineering challenge of our time
He let the ai write that in the same chat he asks his other questions in.
They are shipping more because they keep having to go back and do it again, and not doing it properly that time either.
Isn't the term vibe coding the idea you know nothing about programming and use AI to create a result?
"Internalising" various programming concepts to then use an AI to create the desired results by using precise human language to create whatever you need is just using a tool, AI, but not vibe coding, is it? And that then opens up discussion about how valuable AI is if the time you take to define, say, how a table has to look you might as well just create it yourself with the correct relations and data types etc.
"github gatekeepers"
tf does that even mean.
Are they just mad a PR reviewer noticed they used AI, when the project's rules said not to do that?
this tweet was ai gen
Vibe coding isn't: * bad programming practices Vibe coding is: * good programming practices
Hard to argue with that.
The proompt engineers are loose again
Even his reply was ChatGPT-created.
Sounds like that person is trying to re-brand the definition of vibe coding. Vibe coding does sound cool, but let's be real, it's a bunch of people using LLMs to generate a mass amount of code, Frankensteining it together, and making some nasty stuff that somehow works on a small scale.
My GitHub is barren; my boss's GitHub is all green because all the builds that go into production are created by a bot that uses HIS GitHub account to create the release branch.
Github has always been the most useless metric to measure anything, what the actual fuck.
Ironic that the post below is literally copy-pasted from ChatGPT
This is exactly why there are software engineering degrees that involve little coding and separate programming tasks from architecture and planning. As for vibe coding, it's akin to vibe directing. Ever sat down in a meeting with a guy who can't cook up a plan and make sure there's no confusion about what needs to be done? Vibe programmers, to me, delegate all the important phases of the software life cycle to the AI, including one of the smallest parts of that life cycle, which is programming. I have a hard time with anything that vibes things. Things need to be planned and follow best practices. You can vibe a website. Good luck vibing real-time embedded systems that save lives in a hospital. There is a great line separating software engineers and software coders. I lived through the script kiddies; we'll live through and get over the vibers soon enough. What a way to celebrate a crutch.
I know what he's talking about. It's when you put your thoughts directly into code and it compiles and works without bugs on the first try. And is easy for others to read too. I could only ever make it work for me with Common Lisp. But a friend claims he does it with Haskell.
The post is literally ai
ARK: Survival Evolved just felt the effects of what vibe coding will do to a code base.
The original team left the game, and it's in the care of a different studio.
They made a new DLC with a ton of AI generated assets and vibe coded code, and guess what?
It broke every single official and unofficial mod in the game.
Like hard crash upon startup.
Took player count from 30k to 3k in like a day and a half.
But hey, at least they were able to get past those pesky gatekeepers
Eugh, they can't even write their own tweets. They've literally outsourced all their thinking to an LLM.
Emojis are the canary in the AI mines
Indeed vibe coding isn't copy pasting from AI, that would require looking at the code
Our monkeys on typewriters are shipping loads more than the gatekeeping professional writers. The productivity is going bananas!
Em dashes detected - opinion rejected
Vibe coping
I see an em dash
Least delusional vibe coder:
Yeah, sure, just redefine vibe coding as something it isn't and then claim it's a good thing.
Complaining about gatekeepers in a gatekeeping post is some next level idiocy
Let them cook
I work with a vibe coder, the most frustrating thing is that they’ll give me a zip file to a v0 project, and just demand to make it work. Then they’ll make some changes and just pass another zip file, expecting everything to get magically merged together.
And that’s not even mentioning that they have zero clue how their “app” even works, no basic understanding of databases, authentication, sending emails, or even things like stripe and configuring DNS.
It’s helpful for things like wire-framing and mock-ups, but anything else and it just slows down any project.
"Vibe coding isn't copy pasting from ChatGPT"
As he tweets a very clearly ChatGPT-generated bullet list
100% this was written by an LLM, itself. The emoji bullet points are a tell.
Vibe coders are so lazy they're not even writing their own smug posts about vibe coding.
Someone must have been picked on too much
I now have GitHub Copilot in my work. Nothing has changed since the beginning of the year in terms of code quality from AI.
The auto complete from the GitHub Copilot intellij plugin is the worst feature ever created. (Used for Kotlin/Java)
It constantly suggests code that can't compile because the functions it suggests don't exist or properties to Android views or composables that don't exist.
It does the same nonsense as Microsoft Copilot: it repeats the same response even when I ask it to change something.
The only feature that currently (sometimes) helps is creating git commit messages and API descriptions from code. The text is mostly helpful but sometimes completely wrong.
For me AI is still in the "a little better than Google" state. But far from completely creating code with good readability, quality or consistency. It's just a mess.
Vibe coders have "internalized" all those things?
I'll consider believing it once I see even a single vibe coder that can explain how their code works, what patterns were used and why those patterns are a good fit for that application.
I'd also love to know what AI adds at that point. They apparently already know what code to write, so why not just write it instead of convincing an AI to write the code that's apparently already in their head?
I've been using AI to create various small python tools, that would have otherwise taken me hours to make. (For converting and processing various binary data for a rom decomp project)
(I've been working as a programmer for 9 years. But I've been programming since I was 10 (15 years))
p-2
Am I a vibe coder now?
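For what it's worth, those "small tedious Python tools" usually look something like this: a struct-based converter for fixed-size binary records. The record layout and file names here are hypothetical, purely to illustrate the shape of the thing:

```python
# Hypothetical converter: read fixed-size 8-byte records (u16 id, u16 x, u16 y,
# u16 flags, little-endian) from a ROM data blob and dump them as CSV.
import csv
import struct
import sys

RECORD = struct.Struct("<HHHH")  # the layout is made up for illustration

def dump_records(blob_path: str, csv_path: str) -> None:
    data = open(blob_path, "rb").read()
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["id", "x", "y", "flags"])
        # iterate over whole records only, ignoring any trailing partial record
        for offset in range(0, len(data) - len(data) % RECORD.size, RECORD.size):
            writer.writerow(RECORD.unpack_from(data, offset))

if __name__ == "__main__":
    dump_records(sys.argv[1], sys.argv[2])
```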
I'm a seasoned dev who uses AI tools to speed up the coding process. I typically generate the UI components faster and wire up the backend APIs separately (using AI, fixing manually, etc.). I'm not someone who believes in these Supabase, vibe-coded, startup-like production apps... but I don't see any problem with vibe coding either. I have junior devs who rely on AI without really paying any attention to what the AI generated, and that's what I think is bad.
So apparently im a vibe coder
I've been a developer for 10 years and it's a great tool.
Yes, it gives me bad OOP code, but it's up to me to make sure everything is secure and up to today's standards
There's some truth to this. I'm able to use Cursor to build small apps for me, to do things that otherwise would have taken me a while, in just a couple hours. These are not production deployments or even near ready for end-user usage, but the speed to results using AI is quite fast. Working at a consulting company we don't have tons of free time, and AI allows me to build lightweight scripts very, very quickly to do things that I probably would have never had the time to get to.
The poster is right, tho. About the shipping I mean.
Software isn't art; it has to be a functioning product at the end. Sure, your code may be more efficient, secure, maintainable, and overall better in every way, but how many projects have you just abandoned to gather dust because the work became overwhelming at some point?
Vibe coding solves that. You actually get to ship stuff. And honestly, as long as it doesn't crash or bug out, no user will care what it looks like under the hood.
Shipping is king. SO many devs never ship any of their projects that they work on.
🤣Found the vibe coder
I've got almost 10 years of experience in IT (programming), and I can agree that it is totally doable to vibe code a production-ready and secure app. You just have to know what you are doing. I am vibe coding a lot.
hey look guys, it's PirateSoftware's alt account.
Vibecoding is by definition not knowing what you’re doing, but just talking to AI until you get something that kinda works.
Using coding agents as a tool while you’re the one actually guiding it towards a solution is just programming.
"You just have to know what you are doing.", ai can be an amazing tool but at that point it's not called "vibe coding" anymore.
By gatekeepers they mean PR reviewers?
Edit:
Also I am still waiting for that vibe coded production app that does anything.
They are all stuck in the 80% phase
I actually created an app with only Copilot to see how good AI is currently, and I have to say ChatGPT failed miserably, but Claude did it for me and created a Next.js chat app which is secure (because it just uses NextAuth lol) and actually works with a MongoDB backend, so it really has already come a long way. I still think you shouldn't use it in prod though.
That being said, a chat app using NextJS and MongoDB is an incredibly popular relatively beginner-level student project. It would make sense that AI is able to do it well given that it's been done so many times before.
I think that is a big part of the illusion. New devs taking on a starter project, and ai crushing it. Then they think it will be able to handle anything.
This is 100% it.
Yes, I also made it create a forum with many features, and that worked perfectly too, but when I tried to get it to help me with complex Python stuff it really messed things up, even though that's also supposed to be a beginner language. So I think it doesn't depend on the language itself, rather on how much code it has to maintain: in React you can just make components and never touch them again, but in Python you need to go through many defs to change things you forgot or want to add, and that's where it loses the overview and does stupid stuff.
It depends on both. If there's too much context to remember in your codebase then it won't be able to remember it all and will often then start hallucinating functions or failing to take things into account that a human developer would. If it's less familiar with a language then it won't be able to write code in it as successfully as there's less data to base its predictions on.
Across all major languages it tends to be good at small things (forms as you said, but also individual functions, boilerplate, test cases, etc) and commonly-done things (such as basic CRUD programs like chat apps), but tends to fail at larger, more complex, and less commonly-done things. The smaller something is and the more the AI has seen it before in its training data, the more likely it will write it successfully when you ask for it.
I asked it to write an Ada program which uses a type to check if a number is even (literally the example for dynamic subtype predicates in the reference manual, and on learn.adacore.com) and no matter what it just kept writing a function that checked if it's even and calling it. When I asked it to remove the function, it just renamed it. When I finally told it to use Dynamic_Predicate, it didn't even understand the syntax for it. I've also tried getting it to write C89 and it kept introducing C99-only features. AI is terrible at anything even remotely obscure.
It does depend on the language too. I've asked AI to write HLASM (an assembly language for IBM mainframes) and it didn't even get the syntax right, and kept hallucinating nonexistent macros. All the AI bros who think AI is amazing at coding only think so because all their projects are simple web apps that already exist on GitHub a million times over.
Yeah. It’s pretty useless for Hack
You can do that even quicker. Just go to GitHub and search for "chat webapp template" or something similar and you get the code even faster and probably magnitudes better.
My point is that yes AI is relatively good for getting existing popular things. I use it to search things and to generate simple code all the time. Now relying on it to actually create good code? No chance...
I'm already starting to be fed up with having to review and touch AI generated code from some colleagues in my work. It's starting to even slow things down as the applications grow.
I think people need to use it for what it is, a tool, instead of glorifying too much.
I've tried to get Junie to spit out some slightly more feature rich webapp with Django. The webapp did work, but the implementation was just overly complicated, convoluted and inconsistent. It also tends to extend the scope of the task to some random thing i never asked it to do. Kinda annoying. Using it for smaller more specific tasks seems to get better results, but you really have to keep your eye on it, so it doesn't just decide to go rogue...
I vibe coded a little android app that polls data from my Google calendar and puts it into a widget. (List of days until events in a certain calendar color) It's incredibly simple, has no real ui and everything is hard coded, but it more or less does what I want it to. Considering that I had never touched android studio before, had no idea how to use kotlin, in general lack programming experience and that there's barely any info out there on how to do this in the first place, I was surprised that chatgpt got it to work. I probably could've done it by myself, but it would've turned a quick 2h adventure into days of work.
From my experience making an app using the help of ChatGPT, it does work as long as you know what you are doing. I even 100% launched my assistant software, lol
Have you heard of the 90-90 rule?
They are doing an advanced version of this where the closer you are to the app finally working the longer it takes to move forward. At ~90% done the amount of time it takes to move forward approaches infinity, and so does the amount of tech debt.
I don't think the issue is getting a vibe coded app to the point of "working".
It's getting it to the point where it's also secure, not haunted by a questionable number of bugs, and where the UI somehow doesn't explain everything with emoji-based bullet points multiple times on the same landing page, expecting the average user to require Subway Surfers next to an input field for their name.
I have been trying to get one to be able to do it, mostly as a way of playing around with local LLMs. The very latest ones (qwen2.5-coder, qwen3, claude3.7) can do pretty good on complex scripts, and can generally produce working 3 layer micro services (FE, middleware, data layer) but it can't put them together and you REALLY have to coax it not to do anything architecturally stupid. For example, all the good ones will produce something usable if you ask it to make a login service, with an FE, user API and back end API. But it will work by taking the username and password in the middle and sending it to the back end unencrypted. So you need to at least know what you're doing to make it fix that.
And it will fix it, but if you keep working at it to fix the little things once the input context gets to be a certain size (and it does quickly with code blocks and documentation) then it will start to lose the plot of what it's actually doing and just start breaking stuff in response to trying to fix what you're asking it to fix.
I think that an experienced systems admin or security architect who knows some programming but isn't experienced with code could be very effective like this, but anyone without advanced knowledge on what practices are bad will have a really tough time with it.
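To make the "unencrypted in the middle" point concrete, the fix is roughly this shape: the credential hop goes over TLS and the stored value is a salted hash, never the plaintext. Endpoints and parameters below are hypothetical, a sketch of the idea rather than the commenter's actual stack:

```python
# Sketch of the fix being described (hypothetical endpoints; uses the requests lib).
# The generated code POSTed credentials to the backend over plain HTTP; at
# minimum that hop needs TLS, and the backend should only ever store a hash.
import hashlib
import os
import requests

BACKEND = "https://auth-backend.internal/login"  # https, not http

def login(username: str, password: str) -> bool:
    # Middleware relays the credentials over TLS; certificate verification stays on.
    resp = requests.post(
        BACKEND,
        json={"username": username, "password": password},
        timeout=5,
        verify=True,
    )
    return resp.status_code == 200

def store_password(password: str) -> tuple[bytes, bytes]:
    # Backend side: never store the plaintext, only a salted hash (PBKDF2 here).
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```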
Claude code /review
Day 0: Ship product.
Day 1: Begin fixing bugs.
My waybar config is kinda vibe coded and works pretty well, but that isn't really programming, so point still stands
Political Review
I’ve made a few quite good internal web apps in lovable/cursor .. I could have made them by hand.. but being in the role I am.. I wouldn’t have the time…
I'm publishing it in the next days, it's called SnapTask
I can send you the beta if you want, it's for iOS
I feel offended that you suggested I would even touch an iOS device.
One client sent me a MacBook, and I still have not recovered.