Years ago I interviewed at Rackspace. They did a data structures and algorithms type interview. One of the main questions was about designing a data structure for a distributed hash table, using C or equivalent, to be used as a cache, specifically addressing cache invalidation. After outlining the basic approach I stopped and said that I had used systems like that in several projects at my current and former jobs, and that I would use something like Redis, memcache, or even Postgres in one instance, and push to the cache on write rather than have a cache server pull values from the source of truth when it suspected it had stale data. They did not like that answer. I asked why and they said it’s because I wasn’t designing a data structure from scratch. I asked them if the job I was applying for involved creating cache servers from scratch and they said “of course not. We use Redis.” (It might have been memcache, I honestly don’t remember which datastore they liked.) Needless to say, this wasn’t a fit for either of us. While I am perfectly capable of creating toy versions of these kinds of services, I still stand by using existing battle-tested software over rolling your own.
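The push-to-cache-on-write idea the commenter describes can be sketched minimally. This is an assumption-laden sketch: `WriteThroughStore` and its method names are hypothetical, and plain `ConcurrentHashMap`s stand in for Postgres and Redis/memcache.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a push-to-cache-on-write design: every write goes to the
// source of truth first, then the fresh value is pushed into the cache,
// so the cache never has to guess whether its data is stale.
// ConcurrentHashMaps stand in for Postgres (source of truth) and Redis (cache).
public class WriteThroughStore {
    private final Map<String, String> sourceOfTruth = new ConcurrentHashMap<>();
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public void write(String key, String value) {
        sourceOfTruth.put(key, value); // durable write first
        cache.put(key, value);         // then push to cache: no invalidation protocol needed
    }

    public String read(String key) {
        String cached = cache.get(key);
        return cached != null ? cached : sourceOfTruth.get(key); // miss falls back to source
    }
}
```

The point of the design is that invalidation disappears as a problem: the cache is updated at the moment the source of truth changes, instead of the cache polling for staleness.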
If you worry about forgetting how to code, then code. You already don’t know how to code 99% of the system you are using to post this comment (Verilog, CPU microcode, GPU equivalents, probably MMU programming, CPU-specific assembly, and so on). You can get ahead of the competition by learning some of that tech. Or not. But technically all you need is a magnetized needle and a steady hand.
I was like well I'd probably just make a Set in Java and insert letters until it has a duplicate.
They didn't like that. So I was like well I guess I can make a double for-loop and check that way, and they liked that ... It is weird how people like you to re-invent the wheel as opposed to just importing one.
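Both answers from the anecdote, side by side, assuming the task was checking whether a string contains a duplicate character (the exact problem statement isn't given, so this is a reconstruction):

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateCheck {
    // The "just use a Set" answer: Set.add() returns false on a repeat.
    static boolean hasDuplicateWithSet(String s) {
        Set<Character> seen = new HashSet<>();
        for (char c : s.toCharArray()) {
            if (!seen.add(c)) return true; // O(n) time, O(n) space
        }
        return false;
    }

    // The answer the interviewers wanted: compare every pair of characters.
    static boolean hasDuplicateWithLoops(String s) {
        for (int i = 0; i < s.length(); i++) {
            for (int j = i + 1; j < s.length(); j++) {
                if (s.charAt(i) == s.charAt(j)) return true; // O(n^2) time, O(1) space
            }
        }
        return false;
    }
}
```

Same result either way; the Set version imports the wheel, the nested loop re-invents it.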
One is domain knowledge, which is less important in the age of Google search and StackOverflow (and even less so in the age of LLMs, but I guess interview techniques haven't caught up yet).
The second is the ability to understand a nested for loop, and if a coder can't do that by the point they reach an interview, it can probably never be taught.
It could be argued that being able to think up using a set in this instance is also an important skill, and I agree. But nested for loops are a foundational skill; if the interviewee has problems there, it's a good thing to know about early.
It could also be argued that they should just say directly "solve this using loops" if that's what they want, and well, yeah.
My favorite way to interview people is to ask them about their work and personal projects, about what parts of those were tricky, why, and how they solved those challenges. This gets candidates to talk much more openly about the experience they have, and we can discuss real-world practical problems, down to having them sketch pseudocode (or show off their GitHub repos of specific projects), which very efficiently tells you how they think. It’s the equivalent of a “tell me about bridges you have designed” vs. a “here are popsicle sticks, design a toy bridge” approach.
The company itself also turned into a circle of hell so probably for the best.
Jokes aside, while I'm almost sure that the ability to code can be lost and regained just like training a muscle, what I'm more worried about is the rug pull and squeeze that is bound to happen sometime in the next 5 to 10 years, unless LLMs go the way of Free Software, GNU style. If the latter happens then LLMs for coding will be more or less like calculators, and personally I don't know how much more harmful that would be compared to the boost in productivity.
That said if the former becomes reality (and I hope not!) then we're in for some huge existential crises when people realize they can barely materialize the labour part of their jobs after doing the thinky part and the meetings part.
In time, even video and embodied training may be possible for amateurs, though that's difficult to contemplate today.
People into homelabs have been running AI tools on home servers for years.
And they're all too small and dumb to be useful for anything but the most basic tasks.
We definitely would not have Electron and that's a world I want to live in.
Things will probably continue in that general direction. And just like today, a small number of people who really know what they're doing keep everything humming so people can build on top of it. By importing 13 python libraries they don't completely understand, or having an AI build 75% of their CRUD app.
Your code should have tests the AI can use to test the code it wrote.
And thanks to MCP, you can literally point your LLM to the documentation of your preferred tool [1].
One of the skills I've developed is spinning (back) up on problems quickly while holding the overall picture in my head. I find with AI I'm just doing that even more often, and I now have 5 direct reports (AI) added to the 6 teams of 8 I work with through managers and roadmaps.
Pretty much me with some IDEs and their code inspections and refactoring capabilities and run profile configurations (especially in your average enterprise Java codebase). Oh well.
With your leave, this is going up on my wall :)
I think the talent pipeline has contracted, and/or will, and will overcorrect. But maybe the industry’s carrying capacity for devs has shrunk.
Which is a problem when exactly? When civilization collapses?
But when you get into dark corners, Opus remains useful, or at minimum not harmful; Sonnet (especially in Claude Code) is really useful doing something commonly done in $MAINSTREAM_STACK, but will wreck your tree on some io_uring sorcery.
I would love to see what a system like Claude Code could cook up running continuously for weeks. But I imagine it would get stuck in some infinite recursive loop.
E.g. it wanted to build a data query language with temporal operations but completely forgot to keep historical data.
It currently lacks the ability to focus on the overall goal and prioritize sub-tasks accordingly and instead spirals into random side quests real quick.
edit:
You can technically hack the API key from the subscription, but that’s probably brittle.
Or is there some other meta I’m missing?
but I agree, at least the way I use AI tools, it'd be unfeasible to review the code using this method.
I assume they have very peaky demand, especially when Europe + N American office hours overlap (though I'm assuming b2b demand is higher than b2c). I'm also assuming Asian demand is significantly less than "the west", which I guess would be true given the reluctance to serve Chinese users (and Chinese users using locally hosted models?).
I know OpenAI and Anthropic have 'batch' pricing but that's slightly different as it's asynchronous and not well suited for a lot of code tasks. Think a more dynamic model for users would make a lot more sense - for example, a cheaper tier giving "Max" usage but you can only use it 8pm-6am Eastern time, otherwise you are on Pro limits.
You can even pause. I will publish a CLI that does the same, based on the same syntax. It uses the GitHub Claude Action YAML syntax: https://github.com/codingworkflow/claude-runner/blob/main/.g...
That being said, it wouldn't surprise me if subscribers are actually losing Claude money and only the API is profitable.
If you’re asleep by then, you miss the chance to resume right when it resets. The script is just a workaround to automatically pick up where you left off as soon as the quota is restored.
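The workaround script described above essentially reduces to "sleep until the quota window reopens, then fire the queued work". A minimal sketch of that logic (the reset timestamp and the `resumeSession` callback are assumptions about how such a script would be wired up):

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the resume-on-quota-reset workaround: idle until the
// usage window reopens, then immediately pick up where you left off,
// so no part of the fresh quota window is wasted while you sleep.
public class QuotaResumer {
    static void waitAndResume(Instant quotaResetsAt, Runnable resumeSession)
            throws InterruptedException {
        Duration untilReset = Duration.between(Instant.now(), quotaResetsAt);
        if (!untilReset.isNegative()) {
            Thread.sleep(untilReset.toMillis()); // idle until the window reopens
        }
        resumeSession.run(); // e.g. re-send the last prompt to the CLI
    }
}
```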
I think I'm faster with Claude Code overall, but that's because it's a tradeoff between "it makes me much faster at the stuff I'm not good at" and "it slows me down on the stuff I am good at". Maybe with better prompting skills, I'll be faster overall, but I'm definitely glad I don't have to write all the boilerplate yet again.
It is part of the learning curve discovering that making a non-deterministic system act deterministically based on some vague instructions is actually pretty difficult.
https://en.m.wikipedia.org/wiki/Probably_approximately_corre...
you mean another wrapper?
There is a lot of low-hanging fruit that can be tackled, from helping people stay focused to managing household schedules. I honestly don't even think the product has to be SaaS; a GPU with 16GB of VRAM, or a well-equipped MacBook, can do everything locally.
I'm not sure what this means; what exactly is an AI assistant SaaS? There are plenty of wrappers around LLMs that you can use, but I'm guessing that a wrapper around (for example) the ChatGPT or Claude API isn't what you had in mind, right?
Can you explain?
Yeah, well it would be the next major step towards human irrelevance.
Or at least, for developers.
The really nice thing about it is that I gave it memory, so a lot of these behaviours are just things you teach it. For example, I only programmed it to be able to read my calendar and add events, and then told it "before you add an event, check for conflicts" and it learned to do that. I really like that sort of "programming by saying stuff" that LLMs enable.
I'm looking forward to seeing where this experiment goes, email me if you want access/want to discuss features. I don't know if I'll open it up to everyone, as LLMs are costly, but maybe I could do a "bring your own key" thing.
Feel free to email me (email in profile) if you'd like to try it out. Right now it only does weather and Google Calendar, but adding new integrations is easy and the interesting thing is the fact that it can learn from you, and will behave like a PA would, while also being proactive and messaging you without you having to message it first.
I did make a prototype a while ago, which I integrated with a hardware device, and that was extremely useful, being able to do things by me teaching it. For example, it only had access to my calendar and its memories, but I told it (in chat) to check for and notify me of conflicts before adding an event, and told it the usual times of some events, so then I'd say "I'm playing D&D on Thursday" and it would reply with "you can't, you have an appointment at 8PM". This sounds simple for a human, but the LLM had to already know what time D&D was, that it's something I can't do during appointments, and that I wanted to be informed of conflicts, which are all things that I didn't have to program, but just instructed the LLM to do.
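The "check for conflicts before adding" behaviour the assistant learned bottoms out in an ordinary interval-overlap test against the calendar it can read. A sketch of that deterministic core (the `Event` record and its fields are hypothetical; times are minutes since midnight for brevity):

```java
import java.util.List;

public class CalendarConflicts {
    // Times are minutes since midnight of the same day, for brevity.
    record Event(String title, int start, int end) {}

    // Two events conflict iff their intervals overlap:
    // each one starts before the other ends.
    static boolean conflicts(Event a, Event b) {
        return a.start() < b.end() && b.start() < a.end();
    }

    // What "check for conflicts before adding an event" amounts to.
    static Event findConflict(List<Event> calendar, Event candidate) {
        for (Event existing : calendar) {
            if (conflicts(existing, candidate)) return existing;
        }
        return null; // no conflict: safe to add
    }
}
```

The interesting part in the anecdote is that none of this was programmed explicitly: the LLM composed the check from the instruction, its calendar access, and its memory of when D&D usually happens.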