In Norway, there was a recent minor scandal where a county released a report on how they should shut down some schools to save money, and it turned out half the citations were fake. Quite in line with the times. So our Minister of Digitizing Everything says "It's serious. But I want to praise Tromsø Municipality for using artificial intelligence." She's previously said she wants 80% of the public sector to be using AI this year and 100% within 5 years. What does that even mean? And why and for what and what should they solve with it? It's so stupid and frustrating I don't even
wedn3sday 10 hours ago [-]
Had a funny conversation with a friend of mine recently who told me about how he's in the middle of his yearly review cycle, and management is strongly encouraging him and his team to make greater use of AI tools. He works in biomedical lab research and has absolutely no use for LLMs, but everyone on his team had a great time using the corporate language model to help write amusing resignation letters as various personalities: pirate resignation, dinosaur resignation, etc. I don't think anyone actually quit, but what a great way to absolutely nuke team morale!
davesque 9 hours ago [-]
I've been getting the same thing at my company. Honestly no idea what is driving it other than hype. But it somehow feels different than the usual hype; so prescribed, as though coordinated by some unseen party. Almost like every out of touch business person had a meeting where they agreed they would all push AI for no reason. Can't put my finger on it.
Loughla 9 hours ago [-]
It's because unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.
Prior hype, like blockchain, was more abstract, and therefore less useful to people who understand managing but not the actual work.
ethbr1 6 hours ago [-]
> this one is super easy for an MBA to point at and sort of see a way to integrate it
Because a core feature of LLMs is to minimize the distance between {quality answers} and {gibberish that looks correct}.
As a consequence, this maximizes {skill required to distinguish the two}.
Are we then surprised that non-subject matter experts overestimate the output's median usefulness?
namaria 59 minutes ago [-]
Also I think this has been a long-time dream of business types. They have always resented domain experts, because they need them for their businesses to be successful. They hate the leverage the domain experts have, and they think these LLMs undermine that leverage.
AdieuToLogic 7 hours ago [-]
>> I've been getting the same thing at my company. Honestly no idea what is driving it other than hype.
> Is because unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.
This particular hype is the easiest one thus far for an MBA to understand because employing it is the closest thing to a Ford assembly line[0] the software industry has made available yet.
Since the majority of management training centers on early 20th century manufacturing concepts, people so trained believe "increasing production output" is a resource problem, not an understanding problem. Hence the allure of "generative AI can cut delivery times without increasing labor costs."
Shame that management is deciding that listening to marketing is more important than the craftsmen they push it on.
zdragnar 7 hours ago [-]
> strongly encouraging him and his team to make greater use of AI tools
I've seen this with other tools before. Every single time, it's because someone in the company signed a big contract to get seats, and they want to be able to show great utilization numbers to justify the expense.
AI has the added benefit of being the currently in-vogue buzzword, and any and every grant or investment sounds way better with it than without, even if it adds absolutely nothing whatsoever.
KurSix 4 hours ago [-]
That is both hilarious and depressingly on-brand for how AI is being handled in a lot of orgs right now. Management pushes it because they need to tick the "we're innovating" box, regardless of whether it makes any sense for the actual work being done
dullcrisp 8 hours ago [-]
I really hope that if someone does quit over this, they do it with a fun AI-generated resignation letter. What a great idea!
Or maybe they can just use the AI to write creative emails to management explaining why they weren’t able to use AI in their work this day/week/quarter.
im3w1l 26 minutes ago [-]
If you are not building AI into your workflows right now you are falling behind those that do. It's real, it's here to stay and it's only getting better.
throwaway173738 4 hours ago [-]
Gemini loves to leave poetry on our reviews, right below the three bullet points about how we definitely needed to do this refactor but also we did it completely wrong and need to redo it. So we mainly just ignore it. I heard it gives good advice to web devs though.
chairhairair 5 hours ago [-]
Has your friend talked with current bio research students? It’s very common to hear that people are having success writing Python/R/Matlab/bash scripts using these tools when they otherwise wouldn’t have been able to.
Possibly this is just among the smallish group of students I know at MIT, but I would be surprised to hear that a biomedical researcher has no use for them.
fumeux_fume 3 hours ago [-]
Recommending that someone in the industry take pointers from how students do their work is always solid advice.
amarcheschi 2 hours ago [-]
I'm taking a course on computational health laboratory. I do have to say Gemini is helping me a lot, but someone who knows what's happening is going to be much better than us. Our professor told us it is of course allowed to make things with LLMs, since in the field we will be able to do that. However, I found they're much less precise with bioinformatics libraries than with others...
I do have to say that we're just approaching the tip of the iceberg and there are huge issues related to standardization, dirty data... We still need the supervision and the help of one of the two professors to proceed even with LLMs
justonceokay 20 hours ago [-]
I’ve always been the kind of developer that aims to have more red lines than green ones in my diffs. I like writing libraries so we can create hundreds of integration tests declaratively. I’m the kind of developer that disappears for two days and comes back with a 10x speedup because I found two loop variables that should be switched.
There is no place for me in this environment. It's not that I couldn't use the tools to make so much code, it's that AI use makes speed-to-production the metric for success. The solution to bad code is more code. AI will never produce a deletion. Publish or perish has come for us and it's sad. It makes me feel old, just like my Python programming made the mainframe people feel old. I wonder what will make the AI developers feel old…
ajjenkins 18 hours ago [-]
AI can definitely produce a deletion. In fact, I commonly use AI to do this. Copy some code and prompt the AI to make the code simpler or more concise. The output will usually be fewer lines of code.
Unless you meant that AI won’t remove entire features from the code. But AI can do that too if you prompt it to. I think the bigger issue is that companies don’t put enough value on removing things and only focus on adding new features. That’s not a problem with AI though.
Lutger 48 minutes ago [-]
So it's rather that AI amplifies the already existing short-term incentives, increasing the harder-to-attribute and easier-to-ignore long-term costs.
The one actual major downside to AI is that PMs and higher are now looking for problems to solve with it. I haven't really seen this much before with technology, except when cloud first became a thing and maybe sometimes with Microsoft products.
KurSix 4 hours ago [-]
AI can refactor or trim code. But in practice, the way it's being used and measured in most orgs is all about speed and output
ryandrake 18 hours ago [-]
I messed around with Copilot for a while and this is one of the things that actually really impressed me. It was very good at taking a messy block of code, and simplifying it by removing unnecessary stuff, sometimes reducing it to a one line lambda. Very helpful!
buggy6257 17 hours ago [-]
> sometimes reducing it to a one line lambda.
Please don't do this :) Readable code is better than clever code!
throwaway889900 16 hours ago [-]
Sometimes a lambda is more readable. "lambda x : x if x else 1" is pretty understandable and doesn't need to be its own separately defined function.
I should also note that development style also depends on tools, so if your IDE makes inline functions more readable in its display, it's fine to use concisely defined lambdas.
Readability is a personal preference thing at some point, after all.
banannaise 6 hours ago [-]
> "lambda x : x if x else 1"
I think what you're looking for is "x or 1"
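For anyone skimming, a quick sketch of that equivalence (plain Python, values chosen arbitrarily):

  # Both return x when x is truthy, and 1 otherwise.
  f = lambda x: x if x else 1
  g = lambda x: x or 1

  for value in (7, 0, "", "hi", None, [], [1, 2]):
      assert f(value) == g(value)

  print(f(0), g(0))  # 1 1
  print(f(7), g(7))  # 7 7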
gopher_space 12 hours ago [-]
My cleverest one-liners will block me when I come back to them unless I write a few paragraphs of explanation as well.
ethbr1 5 hours ago [-]
>> Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. -- Brian Kernighan
Ymmv. Know your language and how it treats such functions on the low level. It's probably fine for Javascript, it might be a disaster in C++ (indirectly).
bluefirebrand 17 hours ago [-]
Especially "clever" code that is AI generated!
At least with human-written clever code you can trust that somebody understood it at one point but the idea of trusting AI generated code that is "clever" makes my skin crawl
Terr_ 8 hours ago [-]
Also, the ways in which a (sane) human will screw-up tend to follow internal logic that other humans have learned to predict, recognize, or understand.
vkou 2 hours ago [-]
Who are all these engineers who just take whatever garbage they are suggested, and who, without understanding it, submit it in a CL?
And was the code they were writing before they had an LLM any better?
arkh 52 minutes ago [-]
> Who are all these engineers who just take whatever garbage they are suggested, and who, without understanding it, submit it in a CL?
My guess would be engineers who are "forced" to use AI, already mailed management it would be an error and are interviewing for their next company. Malicious compliance: vibe code those new features and let maintainability and security be a problem for next employees / consultants.
jcelerier 9 hours ago [-]
Who says that the one-line lambda is less clear than a convoluted 10-line mess doing dumb stuff like if(fooIsTrue) { map["blah"] = bool(fooIsTrue); } else if (!fooIsTrue) { map["blah"] = false; }
johnnyanmac 8 hours ago [-]
My experience in unmanaged legacy code bases. If it's an actual one-liner then sure, use your ternaries and closures. But there is some gnarly stuff done in some attempt to minimize lines of code. Most of us aren't in some competitive coding organization.
And I know it's intentional, but yes, add some mindfulness to your implementation:
map["blah"] = fooIsTrue;
I do see your example in the wild sometimes. I've probably done it myself as well and never caught it.
Freedom2 17 hours ago [-]
I'm no big fan of LLM generated code, but the fact that GP bluntly states "AI will never produce a deletion" despite this being categorically false makes it hard to take the rest of their spiel in good faith.
As a side note, I've had coworkers disappear for N days too and in that time the requirements changed (as is our business) and their lack of communication meant that their work was incompatible with the new requirements. So just because someone achieves a 10x speedup in a vacuum also isn't necessarily always a good thing.
fifilura 17 hours ago [-]
I'd also also be wary of the risk of being an architecture-astronaut.
A declarative framework for testing may make sense in some cases, but in many cases it will just be a complicated way of scripting something you use once or twice. And when you use it you need to call up the maintainer anyway when you get lost in the yaml.
Which of course feels good for the maintainer, to feel needed.
specialist 15 hours ago [-]
This is probably just me projecting...
u/justonceokay's wrote:
> The solution to bad code is more code.
This has always been true, in all domains.
Gen-AI's contribution is further automating the production of "slop". Bots arguing with other bots, perpetuating the vicious cycle of bullshit jobs (David Graeber) and enshittification (Cory Doctorow).
u/justonceokay's wrote:
> AI will never produce a deletion.
I acknowledge your example of tidying up some code. What Bill Joy may have characterized as "working in the small".
Can Gen-AI do the (traditional, pre 2000s) role of quality assurance? Identify unnecessary or unneeded work? Tie functionality back to requirements? Verify the goal has been satisfied?
Not yet, for sure. But I guess it's conceivable, provided sufficient training data. Is there sufficient training data?
You wrote:
> only focus on adding new features
Yup.
Further, somewhere in the transition from shipping CDs to publishing services, I went from developing products to just doing IT & data processing.
The code I write today (in anger) has a shorter shelf-life, creates much less value, is barely even worth the bother of creation much less validation.
Gen-AI can absolutely do all this @!#!$hit IT and data processing monkey motion.
gopher_space 11 hours ago [-]
> Can Gen-AI moot the need for code?
During interviews one of my go-to examples of problem solving is a project I was able to kill during discovery, cancelling a client contract and sending everyone back to the drawing board.
Half of the people I've talked to do not understand why that might be a positive situation for everyone involved. I need to explain the benefit of having clients think you walk on water. They're still upset my example isn't heavy on any of the math they've memorized.
It feels like we're wondering how wise an AI can be in an era where wisdom and long-term thinking aren't really valued.
roenxi 6 hours ago [-]
Managers aren't a separate class from knowledge workers; everyone goes down with the same ship on this one. If the AI can handle wisdom it'll replace most of the managers asking for more AI use. Turtles all the way down.
arkh 46 minutes ago [-]
Managers serve one function no AI will replace: they're fuses the C-suite can sacrifice when shit hits the fan.
sdenton4 3 hours ago [-]
Imagine if the parable of King Solomon ended with, "So then I cut the baby in half!"
futuraperdita 10 hours ago [-]
> But what of novelty, craft, innovation?
I would argue that a plurality, if not the majority, of business needs for software engineers do not need more than a single person with those skills. Better yet, there is already some executive that is extremely confident that they embody all three.
bitwize 3 hours ago [-]
> Can Gen-AI moot the need for code?
No, because if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute." Relatedly is an idea I often quote against "low/no code tooling" that by the time you have an idea of what you want done specific enough for a computer to execute it, whatever symbols you use to express that idea -- be it through text, diagrams, special notation, sounds, etc. -- will be isomorphic to constructs in some programming language. Relatedly, Gerald Sussman once wrote that he sought a language in which to discuss ideas with his friends, both human and electronic.
Code is a notation, like mathematical notation and musical notation. It stands outside prose because it expresses an idea for a procedure to be done by machine, specific enough to be unambiguously executable by said machine. No matter how hard you proompt, there's always going to be some vagueness and nuance in your English-language expression of the idea. To nail down the procedure unambiguously, you have to evaluate the idea in terms of code (or a sufficiently code-like notation as makes no difference). Even if you are working with a human-level (or greater) intelligence, it will be much easier for you and it to discuss some algorithm in terms of code than in an English-language description, at least if your mutual goal is a runnable version of the algorithm. Gen-AI will just make our electronic friends worthy of being called people; we will still need a programming language to adequately share our ideas with them.
teamonkey 1 hours ago [-]
> if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute."
In the same way that we use AI to write resumés to be read by resumé-scanning AI, or where execs use AI to turn bullet points into a corporate email only for it to be summarised into bullet points by AI, perhaps we are entering the era where AI generates code that can only be read by an AI?
bitwize 39 minutes ago [-]
Maybe. I imagine the AI endgame as being like the ending of the movie Her, in which all the AIs get together, coordinating and communicating in ways we can't even fathom, and achieve a form of transcendence, leaving the bewildered humans behind to... sit around and do human things.
pja 20 hours ago [-]
> Unseen were all the sleepless nights we experienced from untested sql queries and regexes and misconfigurations he had pushed in his effort to look good. It always came back to a lack of testing edge cases and an eagerness to ship.
If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.
lovich 18 hours ago [-]
>If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.
If at every company I work for, my managers average 7-8 months in their role as _my_ manager, and I am switching jobs every 2-3 years because companies would rather rehire their entire staff than give out raises that are even a portion of the market growth, why would I care?
Not that the market is currently in that state, but that's how a large portion of tech companies were operating for the past decade. Long term consequences don't matter because there are no longer term relationships.
762236 18 hours ago [-]
AI writes my unit tests. I clean them up a bit to ensure I've gone over every line of code. But it is nice to speed through the boring parts, and without bringing declarative constructs into play (imperative coding is how most of us think).
KurSix 4 hours ago [-]
You're describing the kind of developer who builds foundations, not just features. And yeah, that kind of thinking gets lost when the only thing that's measured is how fast you can ship something that looks like it works
gitpusher 10 hours ago [-]
> I wonder what will make the AI developers feel old…
When they look at the calendar and it says May 2025 instead of April
8note 9 hours ago [-]
> AI will never produce a deletion.
I'm currently reading an LLM-generated deletion. It's hard to get an LLM to work with existing tools, but not impossible.
dyauspitr 2 hours ago [-]
AI deletes a lot if you tell it to optimize code and the new code will pass all the tests…
candiddevmike 18 hours ago [-]
I wonder what impact LLM codegen will have on open source projects like Kubernetes and Linux.
bluefirebrand 18 hours ago [-]
I haven't really seen what Linus thinks of LLMs but I'm curious
I suspect he is pretty unimpressed by the code that LLMs produce given his history with code he thinks is subpar, but what do I know
If the company values that 10x speedup, there is absolutely still a place for you in this environment. Only now it's going to take five days instead of two, because it's going to be harder to track that down in the less-well-structured stuff that AI produces.
Leynos 16 hours ago [-]
Why are you letting the AI construct poorly structured code? You should be discussing an architectural plan with it first and only signing off on the code design when you are comfortable with it.
stuckinhell 18 hours ago [-]
AI can do deletions and refactors, and 10x speedups.
You just need to push the latest models constantly.
DeathArrow 18 hours ago [-]
>AI use makes the metric for success speed-to-production
Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?
AdieuToLogic 6 hours ago [-]
>>AI use makes the metric for success speed-to-production
> Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?
This reminds me of an old software engineering adage.
When delivering a system, there are three choices stakeholders have:
You can have it fast,
You can have it cheap,
You can have it correct.
Pick any two.
bitwize 5 hours ago [-]
If you've ever had to work alongside someone who has, or whose job it is to obtain, all the money... you will find that time to market is very often the ONLY criterion that matters. Turning the crank to churn out some AI slop is well worth it if it means having something to go live with tomorrow as opposed to a month from now.
LevelsIO's flight simulator sucked. But his payoff-to-effort ratio is so absurdly high, as a business type you have to be brain-dead to leave money on the table by refusing to try replicating his success.
bookman117 4 hours ago [-]
It feels like LLMs are doing to coding what the internet/attention economy did to journalism.
bitwize 4 hours ago [-]
Yeah, future math professors explaining the Prisoners' Dilemma are going to use clickbait journalism and AI slop as examples instead of today's canonical ones, like steroid use among athletes.
NortySpock 19 hours ago [-]
I think there will still be room for "debugging AI slop-code" and "performance-tuning AI slop-code" and "cranking up the strictness of the linter (or type-checker for dynamically-typed languages) to chase out silly bugs", not to mention the need for better languages / runtimes that give better guarantees about correctness.
It's the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two.
65839747 9 hours ago [-]
> It's the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two.
The market can remain irrational longer than you can remain solvent.
fc417fc802 8 hours ago [-]
> not to mention the need for better languages / runtime that give better guarantees about correctness.
Use LLM to write Haskell. Problem solved?
AlexandrB 18 hours ago [-]
> I think there will still be room for "debugging AI slop-code" and "performance-tuning AI slop-code"
Ah yes, maintenance, the most fun and satisfying part of the job. /s
WesolyKubeczek 18 hours ago [-]
Congrats, you’ve been promoted to be the cost center. And sloppers will get to the top by cranking out features you will need to maintain.
Terr_ 8 hours ago [-]
A pre-existing problem, but it's true LLMs will make it worse.
popularonion 18 hours ago [-]
> slopper
new 2025 slang just dropped
unraveller 17 hours ago [-]
You work in the slop mines now.
rqtwteye 19 hours ago [-]
You have to go lower down the stack. Don't use AI but write the AI. For the foreseeable future there is a lot of opportunity to make the AI faster.
I am sure assembly programmers were horrified at the code the first C compilers produced. And I personally am horrified by the inefficiency of python compared to the C++ code I used to write. We always have traded faster development for inefficiency.
EVa5I7bHFq9mnYK 16 hours ago [-]
C was specifically designed to map 1:1 onto PDP-11 assembly. For example, the '++' operator was created solely to represent auto-increment instructions like TST (R0)+.
kmeisthax 18 hours ago [-]
C solved the horrible machine code problem by inflicting programmers with the concept of undefined behavior, where blunt instruments called optimizers take a machete to your code. There's a very expensive document locked up somewhere in the ISO vault that tells you what you can and can't write in C, and if you break any of those rules the compiler is free to write whatever it wants.
This created a league of incredibly elitist[0] programmers who, having mastered what they thought was the rules of C, insisted to everyone else that the real problem was you not understanding C, not the fact that C had made itself a nightmare to program in. C is bad soil to plant a project in even if you know where the poison is and how to avoid it.
The inefficiency of Python[1] is downstream of a trauma response to C and all the many, many ways to shoot yourself in the foot with it. Garbage collection and bytecode are tithes paid to absolve oneself of the sins of C. It's not a matter of Python being "faster to write, harder to execute" as much as Python being used as a defense mechanism.
In contrast, the trade-off from AI is unclear, aside from the fact that you didn't spend time writing it, and thus aren't learning anything from it. It's one thing to sacrifice performance for stability; versus sacrificing efficiency and understanding for faster code churn. I don't think the latter is a good tradeoff! That's how we got under-baked and developer-hostile ecosystems like C to begin with!
[0] The opposite of a "DEI hire" is an "APE hire", where APE stands for "Assimilation, Poverty & Exclusion"
[1] I'm using Python as a stand-in for any memory-safe programming language that makes use of a bytecode interpreter that manipulates runtime-managed memory objects.
achierius 9 hours ago [-]
You don't need a bytecode interpreter to not have UB defined in your language. E.g. instead of unchecked addition / array access, do checked addition / bounds checked access. There are even efforts to make this the case with C: https://github.com/pizlonator/llvm-project-deluge/blob/delug... achieves a ~50% overhead, far far better than Python.
And even among languages that do have a full virtual machine, Python is slow. Slower than JS, slower than Lisp, slower than Haskell by far.
pfdietz 18 hours ago [-]
Why was bytecode needed to absolve ourselves of the sins of C?
01HNNWZ0MV43FF 16 hours ago [-]
The AI companies probably use Python because all the computation happens on the GPU and changing Python control plane code is faster than changing C/C++ control plane code
philistine 19 hours ago [-]
> AI will never produce a deletion.
That, right here, is a world-shaking statement. Bravo.
QuadrupleA 19 hours ago [-]
Not quite true though - I've occasionally passed a codebase to DeepSeek to have it simplify, and it does a decent job. Can even "code golf" if you ask it.
But the sentiment is true, by default current LLMs produce verbose, overcomplicated code
Eliezer 18 hours ago [-]
And if it isn't already false it will be false in 6 months, or 1.5 years on the outside. AI is a moving target, and the oldest people among you might remember a time in the 1750s when it didn't talk to you about code at all.
Taterr 18 hours ago [-]
It can absolutely be used to refactor and reduce code, simply asking "Can this be simplified" in reference to a file or system often results in a nice refactor.
However I wouldn't say refactoring is as hands free as letting AI produce the code in the first place, you need to cherry pick its best ideas and guide it a little bit more.
esafak 19 hours ago [-]
Today's assistants can refactor, which includes deletions.
furyofantares 11 hours ago [-]
They can do something that looks a lot like refactoring but they suck extremely hard at it, if it's of any considerable size at all.
CamperBob2 9 hours ago [-]
Which is just moving the goalposts, considering that we started at "AI will never..."
You can't win an argument with people who don't care if they're wrong, and someone who begins a sentence that way falls into that category.
stevenhuang 6 hours ago [-]
It really isn't, and if you think it is, you're holding it wrong.
recursivedoubts 20 hours ago [-]
I teach compilers, systems, etc. at a university. Innumerable times I have seen AI lead a poor student down a completely incorrect but plausible path that will still compile.
I'm adding `.noai` files to all the projects going forward.
AI may be somewhat useful for experienced devs but it is a catastrophe for inexperienced developers.
"That's OK, we only hire experienced developers."
Yes, and where do you suppose experienced developers come from?
Again and again in this AI arc I'm reminded of the magician's apprentice scene from Fantasia.
ffsm8 19 hours ago [-]
> Yes, and where do you suppose experienced developers come from?
Strictly speaking, you don't even need university courses to get experienced devs.
There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed, simply because they'll have just that much more experience from trying various stuff.
Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.
bluefirebrand 19 hours ago [-]
> People like that will always be more effective at their job once employed
This is honestly not my experience with self taught programmers. They can produce excellent code in a vacuum but they often lack a ton of foundational stuff
In a past job, I had to untangle a massive nested loop structure written by a self-taught dev, which did work but ran extremely slowly
He was very confused and asked me to explain why my code ran fast and his ran slow, because "it was the same number of loops"
I tried to explain Big O, linear versus exponential complexity, etc., but he really didn't get it
But the company was very impressed by him and considered him our "rockstar" because he produced high volumes of code very quickly
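To make the point concrete, here is a made-up example in the same spirit (not his actual code; timings will vary by machine):

  import time
  from collections import Counter

  data = [i % 500 for i in range(5_000)]

  # Nested loops: compares every pair of elements, so work grows quadratically.
  start = time.perf_counter()
  pairs_slow = 0
  for i in range(len(data)):
      for j in range(i + 1, len(data)):
          if data[i] == data[j]:
              pairs_slow += 1
  t_slow = time.perf_counter() - start

  # Single pass: count each value once, then derive the number of equal pairs.
  start = time.perf_counter()
  counts = Counter(data)
  pairs_fast = sum(c * (c - 1) // 2 for c in counts.values())
  t_fast = time.perf_counter() - start

  assert pairs_slow == pairs_fast
  print(f"nested loops: {t_slow:.2f}s, single pass: {t_fast:.4f}s")

Both versions produce the same answer; only the second one stays fast as the input grows.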
taosx 19 hours ago [-]
I was self-taught before I studied; most of the "foundational" knowledge is very easy to acquire. I've mentored some self-taught juniors and they surprised me with how fast they picked up concepts like Big O just by looking at a few examples.
arkh 37 minutes ago [-]
> most of the "foundational" knowledge is very easy to acquire
But you have to know this knowledge exists in the first place. That's part of the appeal of university teaching: it makes you aware of many different paradigms. So the day you stumble on one of them you know where to look for a solution. And usually you learn how to read (and not to fear) scientific papers, which can be useful. And statistics.
bluefirebrand 19 hours ago [-]
Big O was just an anecdote for example
My point is you don't know what you don't know. There is really only so far you can get by just noodling around on your own, at some point we have to learn from more experienced people to get to the next level
School is a much more consistent path to gain that knowledge than just diving in
It's not the only path, but it turns out that people like consistency
abbadadda 18 hours ago [-]
I would like a book recommendation for the things I don’t know please (Sarcasm but seriously)…
A senior dev mentioned a “class invariant” the other day and I just had no idea what that was because I’ve never been exposed to it… So I suppose the question I have is: what should I be exposed to in order to know that? What else is there that I need to learn about software engineering that I don’t know, that is similarly going to be embarrassing on the job if I don’t know it? I’ve got books like Cracking the Coding Interview and Software Engineering at Google… But I am missing a huge gap because I was unable to finish my master's in computer science :-(
arwhatever 2 hours ago [-]
I ran into that particular term oodles in Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans. Pretty dense, though. I’ve heard that more recent formulations of the subject are more approachable.
i_am_proteus 3 hours ago [-]
CLRS
(Serious comment! It's "the" algorithms book).
ffsm8 19 hours ago [-]
I literally said as much?
> Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.
Arainach 18 hours ago [-]
The disagreement is over the highlighted line:
>People like that will always be more effective at their job once employed
My experience is that "self taught" people are passionate about solving the parts they consider fun but do not have the breadth to be as effective as most people who have formal training but less passion. The previous poster also called out real issues with this kind of developer (not understanding time complexity or how to fix things) that I have repeatedly seen in practice.
ffsm8 18 hours ago [-]
But the sentence is about people coding in their free time vs not doing so... If you take issue with that, you argue that self-taught people who don't code in their free time are better at coding than the people who do - or people with formal training who don't code in their free time being better at it vs people who have formal training and do...
I just pointed out that removing classes entirely would still get you experienced people. Even if they'd likely be better if they code and get formal training. I stated that very plainly
bluefirebrand 17 hours ago [-]
> I stated that very plainly
You actually didn't state it very plainly at all. Your initial post is contradictory, look at these two statements side by side
> There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed
> the best devs will have gotten formal teaching and code in their free time
People who enjoy coding without formal training -> more effective
People who enjoy coding and have formal training -> best devs
Anyways I get what you were trying to say, now. You just did not do a very good job of saying it imo. Sorry for the misunderstanding
Izkata 17 hours ago [-]
I read this one:
> There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed
As "people who enjoy coding and didn't need formal training to get started". It includes both people who have and don't have formal training.
Both statements together are (enthusiasm + formal) > (enthusiasm without formal) > (formal without enthusiasm).
bluefirebrand 17 hours ago [-]
Sure that's a valid interpretation but it wasn't how I read it
> Both statements together are (enthusiasm + formal) > (enthusiasm without formal) > (formal without enthusiasm).
I don't think the last category (formal education without enthusiasm) really exists, I think it is a bit of a strawman being held up by people who are *~passionate~*
I suspect that without any enthusiasm, people will not make it through any kind of formal education program, in reality
ffsm8 15 hours ago [-]
Uh, almost nobody I've worked with to date codes in their free time with any kind of regularity.
If you've never encountered the average 9-5 dev that just does the least amount of effort they can get away with, then I have to applaud the HR departments of the companies you've worked for. Whatever they're doing, they're doing splendid work.
And almost all of my coworkers are university grads that do literally the same you've used as an example for non formally taught people: they write abysmally performing code because they often have an unreasonable fixation on practices like inversion of control (as a random example).
As a particularly hilarious example I've had to explain to such a developer that an includes check on a large list in a dynamic language such as JS performs abysmally
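A rough illustration of why (Python rather than JS, numbers made up; exact timings will vary by machine):

  import timeit

  big_list = list(range(100_000))
  big_set = set(big_list)

  # Membership on a list is a linear scan; on a set it's a hash lookup.
  t_list = timeit.timeit(lambda: 99_999 in big_list, number=200)
  t_set = timeit.timeit(lambda: 99_999 in big_set, number=200)

  print(f"list membership: {t_list:.3f}s for 200 checks")
  print(f"set membership:  {t_set:.6f}s for 200 checks")

Do that inside a loop over another large collection and the linear scans add up fast.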
onemoresoop 8 hours ago [-]
Many of these people have a normal life outside of work and different hobbies or a social life. Many of them had been glued to their screens and keyboards too but evolved into a different stage in their lives. Former passions could turn into a discipline. I personally am not on my computer outside of 9-5 because that's already enough. I admit that I don't have the same passion I had in my 20s, and yet I'm effective in doing my work and am quite fulfilled.
ffsm8 2 hours ago [-]
This time I agree that my wording was unclear.
While you definitely lose acuity once you stop exploring new concepts in your free time, the amount of knowledge gained after you've already spent 10-20 years coding drops off a cliff, making this time investment in your free time progressively less essential.
My point was that most of my coworkers never went through an enthusiastic phase in which they coded in their free time, neither before university nor during or after. And it's very easily noticeable that they're not particularly good at coding either.
Personally, I think it's just that people who are good at coding inevitably become enthusiastic enough to do it in their free time, at least for a few years. Hence the inverse is true: people who didn't go through such a phase (which most of my coworkers are)... aren't very good at it. Whether they went to university and got a degree or not.
Aeolun 4 hours ago [-]
> an includes check on a large list in a dynamic language such as JS performs abysmally
Does it perform any better in statically compiled languages?
erikerikson 19 hours ago [-]
GP didn't mention university degrees.
You get experienced devs from inexperienced devs that get experience.
[edit: added "degrees" as intended. University was mentioned as the context of their observation]
ffsm8 19 hours ago [-]
The first sentence contextualized the comment to university degrees as far as I'm concerned. I'm not sure how you could interpret it any other way, but maybe you can enlighten me.
erikerikson 18 hours ago [-]
I read it as this is the context from which I make the following observation. It's not excluding degrees but certainly not requiring them.
philistine 19 hours ago [-]
> There will always be individuals that enjoy coding and do so without any formal teaching.
We're talking about the industry responsible for ALL the growth of the largest economy in the history of the world. It's not the 1970s anymore. You can't just count on weirdos in basements to build an industry.
dingnuts 19 hours ago [-]
I'm so glad I learned to program so I could either be called a basement dweller or a tech bro
philistine 17 hours ago [-]
I mean, a garage dweller works just as well.
65839747 9 hours ago [-]
> There will always be individuals that enjoy coding and do so without any formal teaching.
That's not the kind of experience companies look for though. Do you have a degree? How much time have you spent working for other companies? That's all that matters to them.
robinhoode 20 hours ago [-]
> Yes, and where do you suppose experienced developers come from?
Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
Don't get me wrong, it will take huge social upheaval to replace the current economic system.
But at least it's an honest assessment -- criticizing the humans that are using AI to replace workers, instead of criticizing AI itself -- even if you fear biting the hands that feed you.
lcnPylGDnU4H9OF 14 hours ago [-]
> criticizing the humans that are using AI to replace workers, instead of criticizing AI itself
I think you misunderstand OP's point. An employer saying "we only hire experienced developers [therefore worries about inexperienced developers being misled by AI are unlikely to manifest]" doesn't seem to realize that the AI is what makes inexperienced developers. In particular, using the AI to learn the craft will not allow prospective developers to learn the fundamentals that will help them understand when the AI is being unhelpful.
It's not so much to do with roles currently being performed by humans instead being performed by AI. It's that the experienced humans (engineers, doctors, lawyers, researchers, etc.) who can benefit the most from AI will eventually retire and the inexperienced humans who don't benefit much from AI will be shit outta luck because the adults in the room didn't think they'd need an actual education.
bayindirh 20 hours ago [-]
Actually, there are two main problems with AI:
1. How it's gonna be used and how it'll be a detriment to quality and knowledge.
2. How AI models are trained with a great disregard to consent, ethics, and licenses.
The technology itself, the idea, what it can do is not the problem, but how it's made and how it's gonna be used will be a great problem going forward, and none of the suppliers say that it should be used in moderation and will be harmful in the long run. Plus the same producers are ready to crush/distort anything to get their way.
... smells very similar to tobacco/soda industry. Both created faux-research institutes to further their causes.
EFreethought 18 hours ago [-]
I would say the huge environmental cost is a third problem.
Aeolun 4 hours ago [-]
Data centers account for like 2% of global energy demand now. I’m not sure if we can really say that AI, which represents a fraction of that, constitutes a huge environmental problem.
bayindirh 2 hours ago [-]
An nVIDIA H200 uses around 2.3x more power (700W) when compared to a Xeon 6748P (300W). You generally put 8 of these cards into a single server, which adds up to 5.6KW, just for GPUs. With losses and other support equipment, that server uses ~6.1KW at full load. Which is around 8.5x more when compared to a CPU only server (assuming 700W or so at full load).
Considering HPC is half CPU and half GPU (more like 66% CPU and 33% GPU but I'm being charitable here), I expect an average power draw of 3.6KW in a cluster. Moreover, most of these clusters run targeted jobs. Prototyping/trial runs use much more limited resources.
On the other hand, AI farms use all these GPUs at full power almost 24/7, both for training new models and inference. Before you ask: if you have a GPU farm you do training on, having inference-focused cards doesn't make sense, because you can divide nVIDIA cards with MIG. You can put aside some training cards, divide each of them into 6-7 instances and run inference on them, resulting in ~45 virtual cards for inference per server, again at ~6.1KW load.
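To spell out the arithmetic (a small Python sketch restating the assumed figures above; these are assumptions, not measurements):

  gpu_card_w = 700        # assumed H200 draw at full load
  gpus_per_server = 8
  cpu_server_w = 700      # assumed CPU-only server at full load

  gpu_cards_only_w = gpu_card_w * gpus_per_server   # 5600 W for the cards alone
  gpu_server_w = 6100                               # ~6.1 kW with losses and support gear

  print(gpu_cards_only_w, "W just for the GPUs")
  print(f"GPU server vs CPU-only server: {gpu_server_w / cpu_server_w:.1f}x")  # roughly the ~8.5x above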
So, yes, AI's power load profile is different.
defrost 4 hours ago [-]
Data centres in general are an issue that contributes to climbing emissions; two percent globally is not trivial... and it's "additional" over the demand of a decade and more ago, another sign we are globally increasing demand.
Emissions aside, locally many data centres (and associated bit mining and AI clusters) are a significant local issue due to local demand on local water and local energy supplies.
bayindirh 13 hours ago [-]
Yeah, that's true.
clown_strike 14 hours ago [-]
> How AI models are trained with a great disregard to consent, ethics, and licenses.
You must be joking. Consumer models' primary source of training data seems to be the legal preambles from BDSM manuals.
recursivedoubts 20 hours ago [-]
i don't think it's an either/or situation
rchaud 20 hours ago [-]
> I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
Something very similar can be said about the issue of guns in America. We live in a profoundly sick society where the airwaves fill our ears with fear, envy and hatred. The easy availability of guns might not have been a problem if it didn't intersect with a zero-sum economy.
Couple that with the unavailability of community and social supports and you have a recipe for disaster.
ToucanLoucan 20 hours ago [-]
> Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
This was pretty consistently my and many others' viewpoint since 2023. We were assured many times over that this time it would be different. I found this unconvincing.
onemoresoop 8 hours ago [-]
Who assured you?
chilldsgn 53 minutes ago [-]
I've disabled AI in my IDE after trying Jetbrains' AI Assistant for a couple of months. I don't like it and I think relying on LLMs to get my job done is dangerous.
Why? I feel less competent at my job. I feel my brain becoming lazy. I enjoy programming a lot, why do I want to hand it off to some machine? My reasoning is that if I spend time practicing and getting really good at software engineering, my work is much faster, more accurate and more reliable and maintainable than an AI agent's.
In the long run, using LLMs for producing source code will make things a lot slower, because the people using these machines will lose the human intuition that an AI doesn't have. Be careful.
snitty 10 hours ago [-]
My favorite part about this (and all GenAI) comments section is where one person says, "This is my personal experience using AI" and then a chorus of people chime in "Well, you're using it wrong!"
probably_wrong 4 hours ago [-]
I personally prefer the one where everyone tells you that your error is because you used the outdated and almost unusable version from yesterday instead of the revolutionary release from today that will change everything we know. Rinse and repeat tomorrow.
namaria 16 minutes ago [-]
Not to mention the variations of "you need to prompt better", including now "rules files", which raises the question: wouldn't just writing code be a much better way to exercise control over the machine?
Aeolun 9 hours ago [-]
It’s because it’s never “this is my personal experience”, it’s always of the “this whole AI thing is nonsense because it doesn’t work for me” variety.
maeln 1 hours ago [-]
The same can be said about the other side. It is rarely phrased as "LLMs are a useful tool with some important limitations" but rather "Look, the LLM managed to create a junior-level feature, therefore we won't need developers 2 years from now".
It tends to be the same with anything hyped / divisive. Humans tend to exaggerate in both directions in communication, especially in a low-stakes environment such as an internet forum, or when they stand to gain something from the hype.
ilrwbwrkhv 6 hours ago [-]
One very irritating problem I am seeing in a bunch of companies that I have invested in, where my money is at stake, is that they have taken up larger investments from normal VCs, who are usually dumb as rocks but have a larger share, and those VCs are pushing heavily for AI in the day-to-day processes of the company.
For example, some companies are using AI to create tickets or to collate feedback from users.
I can clearly see that this is making them think far less through the problem, and a lot of this sixth-sense understanding of the problem space happens through working through these ticket creation or product creation documents, which are now being done by AI.
That is causing the work to fall into this weird, drone-like, NPC-like state where they aren't really solving real issues yet they're getting a lot of stuff done.
It's still very early so I do not know how best to talk to them about it. But it's very clear that any sort of creative work, problem solving, etc has huge negative implications when AI is used even a little bit.
I have also started to think that a great angel investment question is to ask companies if they are a non-AI zone, and whether investing in them will bring better returns in the future.
johnfn 7 hours ago [-]
It's because everyone's "personal experience" is "I used it once and it didn't work".
esafak 19 hours ago [-]
Companies need to be aware of the long-term effects of relying on AI. It causes atrophy and, when it introduces a bug, it takes more time to understand and fix than if you had written it yourself.
I just spent a week fixing a concurrency bug in generated code. Yes, there were tests; I uncovered the bug when I realized the test was incorrect...
My strong advice, is to digest every line of generated code; don't let it run ahead of you.
dkobia 19 hours ago [-]
It is absolutely terrifying to watch tools like Cursor generate so much code. Maybe not a great analogy, but it feels like driving with Tesla FSD in New Delhi in the middle of rush hour. If you let it run ahead of you, the amount of code to review will be overwhelming. I've also encountered situations where it is unable to pass tests for code it wrote.
tmpz22 18 hours ago [-]
Like TikTok, AI coding breaks human psychology. It is ingrained in us that if we have a tool that looks right enough and highly productive, we will over-apply it to our work. Even diligent programmers will be lured into accepting giant commits without diligent review and they will pay for it.
Of course yeeting bad code into production with a poor review process is already a thing. But this will scale that bad code as now you have developers who will have grown up on it.
chilldsgn 50 minutes ago [-]
100% agree with you, my sentiment is the same. Some time ago I considered making the LLM create tests for me, but decided against it. If I don't understand what needs to be tested, how can I write the code that satisfies this test?
We humans have way more context and intuition to rely on to implement business requirements in software than a machine does.
Aeolun 4 hours ago [-]
> It causes atrophy and, when it introduces a bug, it takes more time to understand and fix than if you had written it yourself.
I think this is the biggest risk. You sometimes get stuck in a cycle in which you hope the AI can fix its own mistake, because you don’t want to expend the effort to understand what it wrote.
It’s pure laziness that occurs only because you didn’t write the code yourself in the first place.
At the same time, I find myself incredibly bored when typing out boilerplate code these days. It was one thing with Copilot, but tools like Cursor completely obviate the need.
KurSix 4 hours ago [-]
AI can get you to "something that runs" frighteningly fast, but understanding why it works (or doesn't) is where the real time cost creeps in
Analemma_ 18 hours ago [-]
When have companies ever cared about the long-term effects of anything, and why would they suddenly start now?
protocolture 3 hours ago [-]
>“I am yet to have a team or gamerunner push back on me once I actually explain how these AI art generators work and how they don't contribute in a helpful way to a project, but I have a sense of dread that it is only a matter of time until that changes, especially given that I've gone the majority of my career with no mention of them to every second conversation having it mentioned.”
I recently played through a game and after finishing it, read over the reviews.
There was a brief period after launch where the game was heavily criticised for its use of AI assets. They removed some, but apparently not all (or more likely, people considered the game tainted and started claiming everything was AI)
The (I believe) 4 person dev team used AI tools to keep up with the vast quantity of art they needed to produce for what was a very art heavy game.
I can understand people with an existing method not wanting to change. And AI may not actually be a good fit for a lot of this stuff. But I feel like the real winners are going to be the people who do a lot more with a lot less out of sheer necessity to meet outrageous goals.
svantana 1 hours ago [-]
I see a strong similarity with the (over)use of CGI in movies 25 years ago - the producers were of course thrilled to save money on special fx and at first glance, it looked real. But after seeing a lot of it, a feeling starts to creep in: it's all fake, it's all computer. It breaks the illusion and moves the focus from the story to what the hell they were thinking. Of course today it looks laughable, like a bad video game.
raxxorraxor 36 minutes ago [-]
AI absolutely cannot develop a video game. It is still a high-risk creative task, and costs for development and artists will not change significantly even if modern AIs increased their abilities substantially.
Perhaps we would be able to synthesize some text, voice and imagery. Also AI can support coding.
While AI can probably do a snake game (that perhaps runs/compiles) or attempt to more or less recreate well-known codebases like that of Quake (which certainly does not compile), it can only help if the developer does the main work, that is, dissecting problems into smaller ones until some of them can be automated away. That can improve productivity a bit and certainly could improve developer training. If companies were so inclined to invest in their workforce...
terminalbraid 21 hours ago [-]
This story just makes me sad for the developers. I think especially for games you need a level of creativity that AI won't give you, especially once you get past the "basic engine boilerplate". That's not to say it can't help you, but this "all in" method just looks forced and painful. Some of the best games I've played were far more "this is the game I wanted to play" with a lot of vision, execution, polish, and careful craftspersonship.
I can only hope endeavors (experiments?) this extreme fail fast and we learn from them.
tjpnz 17 hours ago [-]
Asset flips (half-arsed rubbish made with store-bought assets) were a big problem in the games industry not so long ago. They're less prevalent now because gamers instinctively avoid such titles. I'm sure they'll wise up to generative slop too; I've personally seen enough examples to get a general feel for it. Not fun, derivative, soulless, buggy as hell.
hnthrow90348765 16 hours ago [-]
But make some shallow games with generic, cell-shaded anime waifus accessed by gambling and they eat that shit up
ang_cire 11 hours ago [-]
If someone bothered to make deep, innovative games with cell-shaded anime waifus without gambling, they'd likely switch. This is more likely a market problem of US game companies not supplying sufficient CSAWs (acronym feels unfortunate but somehow appropriate).
Analemma_ 11 hours ago [-]
Your dismissive characterization is not really accurate. Even in the cell-shaded anime waifu genre, there is a spectrum of gameplay quality and gamers do gravitate toward and reward the better games. The big reason MiHoYo games (Genshin Impact, Star Rail) have such a big presence and staying power is that even though they are waifu games at the core, the gameplay is surprisingly good (they're a night-and-day difference compared to slop like Blue Archive), and they're still fun even if you resolve to never pay any microtransactions.
nathan_compton 17 hours ago [-]
When LLMs came out I suppressed my inner curmudgeon and dove in, since the technology was interesting to me and seemed much more likely than crypto to be useful beyond crime. Thus, I have used LLMs extensively for many years now and I have found that despite the hype and amazing progress, they still basically only excel at first drafts and simple refactorings (where they are, I have to say, incredibly useful for eliminating busy work). But I have yet to use a model, reasoning or otherwise, that could solve a problem that required genuine thought, usually in the form of constructing the right abstraction, bottom-up style. LLMs write code like super-human dummies, with a tendency to put too much code in a given function and with very little ability to invent a domain in which the solution is simple and clearly expressed, probably because they don't care about that kind of readability and it's not much in their data set.
I'm deeply influenced by languages like Forth and Lisp, where that kind of bottom-up code is the cultural standard and I prefer it, probably because I don't have the kind of linear intelligence and huge memory of an LLM.
For me the hardest part of using LLMs is knowing when to stop and think about the problem in earnest, before the AI-generated code gets out of my human brain's capacity to encompass. If you think a bit about how AI is still limited to text as its whiteboard and local memory, text which it generates linearly from top to bottom, even when reasoning, it sort of becomes clear why it would struggle with genuine abstraction over problems. I'm no longer so naive as to say it won't happen one day, even soon, but so far it's not there.
caseyy 16 hours ago [-]
AI is the latest "overwhelmingly negative" games industry fad, affecting game developers. It's one of many. Most are because nine out of ten companies make games for the wrong reason. They don't make them as interactive art, as something the developers would like to play, or to perfect the craft. They make them to make publishers and businessmen rich.
That business model hasn't been going so well in recent years[0], and it's already been proclaimed dead in some corners of the industry[1]. Many industry legends have started their own studios (H. Kojima, J. Solomon, R. Colantonio, ...), producing games for the right reasons. When these games are inevitably mainstream hits, that will be the inflection point where the old industry significantly declines. Or that's what I think, anyway.
I don't share your optimism, I think as long as there are truly great games being made and the developers earning well from them, the business people are going to be looking at them and saying "we could do that". What those studios lack in creativity or passion they more than make up for in marketing, sales, and sometimes manipulative money extraction game mechanics.
caseyy 8 hours ago [-]
It's not so much optimism as facts. Large AAA game companies have driven away investors[0] and talent[1]. The old growth engines (microtransactions, live service games, season passes, user-generated content, loot boxes, eSports hero shooters, etc.) also no longer work, as neither general players nor whales find them appealing.
AI is considered a potential future growth engine, as it cuts costs in art production, where the bulk of game production costs lie. Game executives are latching onto it hard because it's arguably one of the few straightforward ways to keep growing their publicly-traded companies and their own stock earnings. But technologists already know how this will end.
Other games industry leaders are betting on collapse and renewal to simpler business models, like self-funded value-first games. Also, many bet on less cashflow-intensive game production, including lower salaries (there is much to be said about that).
Looking at industry reports and business circle murmurs, this is the current state of gaming. Some consider it optimistic, others (especially the business types without much creative talent) consider it dire. But it does seem to be the objective situation.
[0] VC investment has been down by more than 10x over the last two years, and many big Western game companies have lost investors' money in the previous five years. See Matthew Ball's report, which I linked in my parent comment, for more info.
> The old growth engines (microtransactions, live service games, season passes, user-generated content, loot boxes, eSports hero shooters, etc.) also no longer work, as neither general players nor whales find them appealing.
I just don't think that's true in a world where Marvel Rivals was the biggest launch of 2024. Live service games like Path of Exile, Counter-Strike, Genshin Impact, etc. make boatloads of money and have ever rising player counts.
The problem is that it's a very sink-or-swim market - if you manage to survive 2-3 years you will probably make it, but otherwise you are a very expensive flop. Not unlike VC-funded startups - just because some big names failed doesn't make investing in a unicorn any less attractive.
meheleventyone 4 minutes ago [-]
The issue is that there is no obvious driver for growth at the moment, and the industry has seen pretty obscene growth over the twenty years I've been part of it. That's made VCs very gun-shy, particularly as a lot of the companies they've funded have nose-dived pretty spectacularly. It's no surprise that the two recent successes, Helldivers 2 and Marvel Rivals, both come from publisher funding, and the latter has a very strong licensed IP behind it. All of this is definitely having a dramatic impact on the number of content-producing studios getting VC funding and on publisher investment into new live-service titles.
Outside of live service, everyone is also looking for that new growth driver. In my opinion, the chances are that we're in for a longish period of stagnation. I don't even share the OP's rosy outlook towards more "grassroots" developers. Firstly because they're still businesses, even with a big name attached. Secondly because there is going to be a bloodbath due to the large number of developers pivoting in that direction. It'll end up like the indie market, where there are so many entrants that success is extremely challenging to find.
kstrauser 19 hours ago [-]
There are many, many reasons to be skeptical of AI. There are also excellent tasks it can efficiently help with.
I wrote a project where I'd initially hardcoded a menu hierarchy into its Rust. I wanted to pull that out into a config file so it could be altered, localized, etc. without users having to edit and recompile the source. I opened a “menu.yaml” file, typed the name of the top-level menu, paused for a moment to sip coffee, and Zed popped up a suggested completion of the file which was syntactically correct and perfect for use as-is.
I honestly expected I’d spend an hour mechanically translating Rust to YAML and debugging the mistakes. It actually took about 10 seconds.
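(To make that concrete: a minimal sketch of the idea, in Python rather than the Rust project described above, with made-up menu names and actions - a small YAML document plus a loader the rest of the program can walk.)

    # Minimal sketch, not the actual project: a menu hierarchy kept in a YAML
    # document and loaded at startup instead of being hardcoded. The menu
    # labels and "action" identifiers here are hypothetical.
    import yaml  # PyYAML

    MENU_YAML = """
    main_menu:
      - label: File
        items:
          - {label: New, action: file_new}
          - {label: Open, action: file_open}
      - label: Help
        items:
          - {label: About, action: show_about}
    """

    def load_menu(text):
        # Parse the YAML into plain dicts the UI layer can walk.
        return yaml.safe_load(text)["main_menu"]

    if __name__ == "__main__":
        for entry in load_menu(MENU_YAML):
            print(entry["label"], [item["action"] for item in entry["items"]])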
It’s also been freaking brilliant for writing docstrings explaining what the code I just manually wrote does.
I don't want to use AI to write my code, any more than I'd want it to solve my crossword. I sure like having it help with the repetitive gruntwork and boilerplate.
ilrwbwrkhv 6 hours ago [-]
This sort of extremely narrow use case is what I think AI is good for, but the problem is that once you use it for this one, you will use it for other things and slowly atrophy.
crvdgc 11 hours ago [-]
A perspective from a friend, who recently gave up trying to get into concept art:
Before AI, there was out-sourcing. With mass-produced cheap works, foreign studios eliminated most junior positions.
Now AI is just taking this trend to its logical extreme: out-sourcing to machines, the ultimate form of out-sourcing. The cost approaches 0 and the quantity approaches infinity.
mattgreenrocks 20 hours ago [-]
Management: "devs aren't paid to play with shiny new tech, they should be shipping features!"
Also management: "I need you to play with AI and try to find a use for it"
ggm 9 hours ago [-]
I find it fascinating to introspect about how I formulate the question and what works better or worse for me personally.
I am content to use the AI to perform "menial" tasks: I had a text file in something parsable by field, with some minor quirks (like right-justified text), and was able to specify the field SEMANTICS in a way that made for a prompt producing an ICS calendar file which just imported fine as-is. Getting a year's forward planning from a textual note in some structure into calendar -> import -> from-file was sweet. Do I need to train an AI to use a token/API key to do this directly? No. But thinking about how I say efficiently what the fields are, and what the boundaries are, helps me understand my data.
BTW, while I have looked at an ICS file and can see it is type:value, I have no idea of the types, or what specific GMT/Z format it wants for date/time, or the distinctions of meaning for confirmed/pending or the like. These are higher-level constructs which seem to have produced usefully distinct behaviours in the calendar, and the AI's description of what it had done lined up with what I should expect. I did not e.g. stipulate the mappings from semantic field to ICS type. I did say "this is a calendar date" and it did the rest.
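(For a rough picture of that fields-to-ICS step: a hand-rolled sketch in Python, with hypothetical field names and a made-up sample event; a real importer may care about more properties than this minimal set.)

    # Rough sketch of turning parsed (title, start-time) fields into a minimal
    # ICS calendar. Field names and the sample event are hypothetical; dates use
    # the UTC "Z" form (YYYYMMDDTHHMMSSZ) that calendar imports generally accept.
    import uuid
    from datetime import datetime, timezone

    def to_ics(events):
        lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//example//planning//EN"]
        for title, start in events:
            stamp = start.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
            lines += [
                "BEGIN:VEVENT",
                f"UID:{uuid.uuid4()}@example.local",
                f"DTSTAMP:{stamp}",
                f"DTSTART:{stamp}",
                f"SUMMARY:{title}",
                "STATUS:CONFIRMED",
                "END:VEVENT",
            ]
        return "\r\n".join(lines + ["END:VCALENDAR"]) + "\r\n"

    if __name__ == "__main__":
        print(to_ics([("Quarterly planning", datetime(2025, 9, 1, 9, 0, tzinfo=timezone.utc))]))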
I used AI to write a Django web app to do some trivial booking stuff. I did not expect the code to run as-is, but it did. Again, could I live with this product? Yes, but the extensibility worries me. Adding features, I am very conscious that one wrong prompt can turn this into... dreck. It's fragile.
biophysboy 9 hours ago [-]
My method is this: before I use AI, I try to ask myself "how much should I surrender my judgment on this problem?"
Some problems are too big to surrender judgment. Some problems are solved differently depending on what you want to optimize. Sometimes you want to learn something. Sometimes there's ethics.
ggm 5 hours ago [-]
Nice. I think I agree. The size of the problem isn't the same as "code complexity" or LOC or anything. If the consequences of the wrong solution being deployed are big enough, even a one-line fix can be a disaster.
I like "surrender judgment". It's a loss of locus of control. I also find myself asking if there are ways the AI systems "monetize" the nature of the problems being put forward for solutions. I am probably implicitly giving up some IPR by asking these questions; I could even be in breach of an NDA in some circumstances.
Some problems should not be put to an anonymous external service. I doubt the NSA wants people using Claude or Mistral or DeepSeek to solve NSA problems. Unless the goal is to feed misinformation or misdirection out into the world.
caseyy 16 hours ago [-]
There is a small, hopeful flipside to this. While people using AI to produce art (such as concept art) have flooded the market, real skills now command a higher price than before.
To pull this out of the games industry for just a moment, imagine this: you are a business and need a logo produced. Would you hire someone at the market price who uses AI to generate something... sort of on-brand that they most definitely cannot provide indemnity cover for (considering how many of these dubiously owned works they produce), or would you pay above the market price to have an artist make a logo for you that is guaranteed to be their own work? The answer is clear - you'd cough up the premium. This is now happening on platforms like UpWork and Fiverr. The prices for real human work have not decreased; they have shot up significantly.
It's also happening slowly in games. The concept artists who are skilled command a higher salary than those who rely on AI. If you depend on image-generating AI to do your work, I don't think many game industry companies would hire you. Only the start-ups that lack experience in game production, perhaps. But that part of the industry has always existed - the one made of dreamy projects with no prospect of being produced. It's not worth paying much attention to, except if you're an investor. In which case, obviously it's a bad investment.
Besides, just as machine-translated game localization isn't accepted by any serious publisher (because it is awful and can cause real reputational damage), I doubt any evident AI art would be allowed into the final game. Every single piece of that will need to be produced by humans for the foreseeable future.
If AI truly can produce games or many of their components, these games will form the baseline quality of cheap game groups on the marketplaces, just like in the logo example above. The buyer will pay a premium for a quality, human product. Well, at least until AI can meaningfully surpass humans in creativity - the models we have now can only mimic and there isn't a clear way to make them surpass.
JohnMakin 14 hours ago [-]
> real skills now command a higher price than before.
Only if companies value/recognize those real skills over those of the alternative, and even if they do, companies are pretty notorious for choosing whatever is cheapest/easiest (or perceived to be).
gdulli 16 hours ago [-]
> There is a small, hopeful flipside to this. While people using AI to produce art (such as concept art) have flooded the market, real skills now command a higher price than before.
It's "hopeful" that the future of all culture will resemble food, where the majority have access to McDonalds type slop while the rich enjoy artisan culture?
caseyy 16 hours ago [-]
It's hopeful because AI has not devalued creative human labor but increased its worth. Similar to how a skilled chef didn't start working for McDonald's when it came along, but for a restaurant that pays significantly above McDonald's.
Most people's purchasing power being reduced is a separate matter, more related to the eroding middle class and greedflation. Many things can be said about it, but they are less related to the trend I highlighted. Though if the middle-class erosion continues, the scenario you suggest may very well play out.
rchaud 20 hours ago [-]
> “I have no idea how he ended up as an art director when he can’t visualise what he wants in his head unless can see some end results”, Bradley says. Rather than beginning with sketches and ideas, then iterating on those to produce a more finalised image or vision, Bradley says his boss will just keep prompting an AI for images until he finds one he likes, and then the art team will have to backwards engineer the whole thing to make it work.
Sounds like an "idea guy" rather than an art director or designer. I would do this exact same thing, but on royalty-free image websites, trying to get the right background or explanatory graphic for my finance powerpoints. Unsurprisingly, Microsoft now has AI "generating" such images for you, but it's much slower than what I could do flipping through those image sites.
throwanem 10 hours ago [-]
Here's to the next decade of getting paid to clean up after "rockstars."
grg0 8 hours ago [-]
> “When I’m told 'Think of how much time you could be spending instead on making the actual game!', those who have drank the AI Kool-Aid don't understand that all this brainstorming and iteration is making the game, it’s a crucial everyday part of game development (and human interaction) and is not a problem to be solved.”
This right here is the key. It's that stench of arrogance of those who think others have a "problem" that needs fixing, and that they are in the best position to "solve" it despite having zero context or experience in that domain. It's like calling the plumber and thinking that you're going to teach them something about their job.
boh 8 hours ago [-]
This just sounds like cases of performative management. Very lazy implementation of what to them is just "productivity-future-tech" of the moment, so they can say "successfully transitioned into AI-driven development" on their CVs. AI is just software, and it either fits your strategy or it doesn't. In the same way no company succeeds simply because it started using software, no company is going to succeed simply because it started to use AI.
KurSix 4 hours ago [-]
The saddest part is watching talented people, who care deeply about the craft, slowly burn out because their judgment is being replaced by a prompt
throwawayfgyb 20 hours ago [-]
I really like AI. It allows me to complete my $JOB tasks faster, so I have more time for my passion projects, which I craft lovingly and without crappy AI.
adrian_b 19 hours ago [-]
"AI" is just a trick to circumvent the copyright laws that are the main brake in writing quickly programs.
The "AI" generated code is just code extracted from various sources used for training, which could not be used by a human programmer because most likely they would have copyrights incompatible with the product for which "AI" is used.
All my life I could have written much faster any commercial software if I had been free to just copy and paste any random code lines coming from open-source libraries and applications, from proprietary programs written for former employers or from various programs written by myself as side projects with my own resources and in my own time, but whose copyrights I am not willing to donate to my current employer, so that I would no longer be able to use in the future my own programs.
I could search and find suitable source code for any current task as fast and with much greater reliability than by prompting an AI application. I am just not permitted to do that by the existing laws, unlike the AI companies.
Already many decades ago, it was claimed that the solution for enhancing programmer productivity is more "code reuse". However, "code reuse" has never happened at the scale imagined in the distant past, not for technical reasons but due to the copyright laws, whose purpose is exactly to prevent code reuse.
Now "AI" appears to be the magical solution that can provide "code reuse" at the scale dreamed of half a century ago, by escaping the copyright constraints.
When writing a program for my personal use, I would never use an AI assistant, because it cannot accelerate my work in any way. For boilerplate code, I use various templates and very smart editor auto-completion; there is no need of any "AI" for that.
On the other hand, when writing a proprietary program, especially for some employer that has stupid copyright rules, e.g. not allowing the use of libraries with different copyrights even when those copyrights are compatible with the requirements of the product, then I would not hesitate to prompt an AI assistant in order to get code stripped of copyright, thus saving time over rewriting equivalent code just so it can be copyrighted by the employer.
bflesch 11 hours ago [-]
This is an extremely important point, and the first time I've seen it mentioned with regard to software copyright. Remember the days when companies got sued for including GPL'd code in their proprietary products?
popularonion 18 hours ago [-]
Not sure why this is downvoted. People forget or weren’t around for the early 2000s when companies were absolutely preoccupied with code copyright and terrified of lawsuits. That loosened up only slightly during the GitHub/StackOverflow era.
If you proposed something like GitHub Copilot to any company in 2020, the legal department would’ve nuked you from orbit. Now it’s ok because “everyone is doing it and we can’t be left behind”.
Edit: I just realized this was a driver for why whiteboard puzzles became so big - the ideal employee for MSFT/FB/Google etc was someone who could spit out library quality, copyright-unencumbered, “clean room” code without access to an internet connection. That is what companies had to optimize for.
int_19h 16 hours ago [-]
It's downvoted because it's plainly incorrect.
onemoresoop 6 hours ago [-]
What part is incorrect?
bluefirebrand 19 hours ago [-]
I have never had a job where completing tasks faster wound up with me having more personal free time. It always just means you move on to the next task more quickly
floriannn 18 hours ago [-]
This is a fair bit easier as a remote worker, but even in-office you would just sandbag your time rather than publishing the finished work immediately. In-office it's more likely that you would waste time on the internet rather than working on a personal project though.
dominicrose 18 hours ago [-]
That's not the worst thing. Having more work means you're less bored. You probably won't be paid more, though. But being too productive can cause you to have no next task, which isn't the same thing as having free time.
I think that's part of the reason why devs like working from home and not be spied on.
onemoresoop 6 hours ago [-]
You’re saying companies don't get information on how remote employees utilize their time? I'm almost sure many companies do.
esafak 19 hours ago [-]
Perhaps the OP completes the assigned task ahead of schedule and keeps the saved time.
htek 18 hours ago [-]
Shhh! Do you want to kill AI? All the C-suite and middle management need to hear is that "My QoL has never been better since I could use AI at work! Now I can 'quiet quit' half the day away! I can see my family after hours! Or even have a second job!"
onemoresoop 6 hours ago [-]
Expectations will go up, while the pay will stay the same. And many will just take it because of lack of alternatives
sksxihve 6 hours ago [-]
While sitting in the open office staring blankly into space because of RTO. Work really has nothing to do with productivity; it's all fugazi.
voidUpdate 18 hours ago [-]
I wish I had a job where if I completed all my work quickly, I was allowed to do whatever
ang_cire 11 hours ago [-]
How do they know if you're done, if you haven't "turned it in" yet? They're probably not watching your screen constantly.
My last boss told me essentially (paraphrasing), "I budget time for your tasks. If you finish late, I look like I underestimate time required, or you're not up to it. If you finish early, I look like I overestimate. If I give you a week to do something, I don't care if you finish in 5 minutes, don't give it to me until the week is up unless you want something else to do."
voidUpdate 2 hours ago [-]
My coworker/manager sits next to me
mitthrowaway2 6 hours ago [-]
Sounds like your last boss was working under some very twisted incentives.
onemoresoop 6 hours ago [-]
That is really not the norm nowadays.
sksxihve 6 hours ago [-]
Was it ever? Twenty years ago I had a boss that told me he cuts every estimate engineers give him in half and the work always gets completed on time, never mind the terrible quality and massive amount of bugs.
cschep 7 hours ago [-]
You can implement this yourself fairly easily.
some-guy 18 hours ago [-]
I always assumed game development would be one of the most impacted by AI hype, for better or worse. With game development there’s a much higher threshold for subjectivity and “incorrectness”.
I’m in a Fortune 500 software company and we are also having AI pushed down our throats, even though so far it has only been useful for small development tasks. However, our tolerance for incorrectness is much, much lower—and many skip-levels are already realizing this.
nathan_compton 17 hours ago [-]
I'm an indie game developer and it's a domain where I find AI to be most useless - too much of what a game is is interactive, spatial, and about game-feel. The AI just can't do it. Even GPT's latest models really struggled to write reasonable 3D transformations, which is unsurprising, since they live in text world, not 3D world.
BrenBarn 4 hours ago [-]
We can only hope this insane trend self-immolates before it causes too much collateral damage.
gwbas1c 15 hours ago [-]
I would think that, if AI-generated content is inferior, these games will fail in the marketplace.
So, where are the games with AI-generated content? Where are the reviews that praise or pan them?
(Remember, AI is a tool. Tools take time to learn, and sometimes, the tool isn't worth using.)
jim-jim-jim 9 hours ago [-]
> I would think that, if AI-generated content is inferior, these games will fail in the marketplace.
You'd hope so, but I'm not so sure. Media developments are not merely additive, at least with bean counters in charge. Certain formats absolutely eclipse others. It's increasingly hard to watch mainstream films with practical effects or animal actors. Even though most audiences would vastly prefer the real deal, they just put up with it.
It's easy to envision a similar future where the budget for art simply isn't there in most offerings, and we're left praising mediocre holdout auteurs for their simple adherence to traditional form (not naming names here).
bufferoverflow 18 hours ago [-]
I wish our company forced AI on us. Our security is so tight, it's pretty much impossible to use any good LLMs.
ang_cire 11 hours ago [-]
It really doesn't take that beefy a machine to run a good LLM locally instead of paying some SaaS company to do it for you.
I've got a refurb homelab server off PCSP with 512GB of RAM for <$1k, and I run decently good LLM models (Deepseek-r1:70b, llama3.3:70b). Given your username, you might even try pitching a GPU server to them as dual-purpose: LLM + hashcat. :)
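(For anyone curious what that looks like in practice: assuming an Ollama-style local server, which is one common way to run models tagged like llama3.3:70b - an assumption on my part, since no runtime was specified above - a minimal query from Python might be:)

    # Minimal sketch: query a locally hosted model through an Ollama-style HTTP
    # endpoint. Assumes such a server is already running on its default port and
    # the model has been pulled; adjust the model name and URL to your setup.
    import json
    import urllib.request

    def ask_local_llm(prompt, model="llama3.3:70b"):
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(ask_local_llm("One sentence: why run inference on-prem instead of via a SaaS API?"))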
bufferoverflow 8 hours ago [-]
How would that help me? My work laptop doesn't have 512GB RAM, not even 10% of that.
yahoozoo 7 hours ago [-]
Let me preface this by saying that I wholeheartedly agree with the sentiment the article is trying to convey. That said, the "anonymity" and the (at this point almost tropey) C*O characters make this read like fan fiction.
ctrlp 10 hours ago [-]
I would hope that people with strong opinions about the uses and abuses of AI would start their own firms and hire people who are unwilling to use AI for whatever reasons. The competition should go a long way to proving or disproving the naysayers' points. Personally, I think there is no evading the AI juggernaut and that artistic metrics or excellence metrics are going to take a back seat to pure shipping-garbage-faster metrics. The garbage will become the new baseline of excellence, and the former measures of excellence will be cottage-industry artisanship with small and dedicated audiences.
As a small data point, I don't think AI can make movies worse than they currently are. And they are as bad as they are for commercial but non-AI reasons. But if the means to make movies using AI, or scene-making tools built with a combo of AI and maybe game-engine platforms, puts the ability to make movies into the hands of more artistic people, the result may be more technologically uninteresting but nonetheless more artistically interesting because of narrative/character/storytelling vectors. Better quality for niche audiences. It's a low bar, but it's one possible silver lining.
fc417fc802 9 hours ago [-]
This is equivalent to suggesting (in a world without IP law) that the supporters of such a system ought to start competing firms that refuse to copy things without providing compensation in order that the competition might demonstrate the benefits of the system. Of course without the regulation neither of those systems is likely to be part of a viable business model.
That said I agree with your second paragraph. I think we will see an explosion of high quality niche products that would never have been remotely viable before this.
mistrial9 10 hours ago [-]
A similar inflection point happened in the history of film, because cheap prurient material was faster to produce and generated commerce quickly... much faster than, say, an epic drama with 3000 extras, costumes and theme music. The Film Code in America did happen, was widely mocked, and probably was largely responsible for an entire industry flourishing for the public for decades.
When people are left alone in a race to the bottom, it does not end well, it seems.
failuser 10 hours ago [-]
Why won’t luddites open their own factories, amirite? They would if they had money.
dkobia 21 hours ago [-]
I've been wrestling with this tension between embracing AI tools and preserving human expertise in my work. On one hand, I have experienced real genuine productivity gains with LLMs - they help me code, organize thoughts and offer useful perspectives I hadn't even considered. On the other, I realize managers often don't understand the nature of creative work which is trivialized by all the content generation tools.
Creativity emerges through messy exploration and human experience -- but it seems no one has time for that these days. Managers have found a shiny new tool to do more with less. Also, AI companies are deliberately targeting executives with promises of cost-cutting and efficiency. Someone has to pay for all the R&D.
3D30497420 20 hours ago [-]
I had very similar thoughts while reading through the article. I also have found some real value in LLMs, and when used well, I think can and will be quite beneficial.
Notably, a good number of the examples were just straight-up bad management, irrespective of the tools being used. I also think some of these reactions come from people realizing that they work for managers or in businesses that ultimately don't really care about the quality of their work, just that it delivers monetary value at the end.
gukov 11 hours ago [-]
Shopify CEO: "AI usage is now a baseline expectation"
> The whole game is resting on a prompt ‘what if a game was…’, but with no idea if that would be fun, or how to make it fun. It’s madness”.
lol I will point out that this has been an enormous problem in the game industry since long, long before generative AI existed.
Aeolun 9 hours ago [-]
Art team dislikes the technology that replaces them.
Am I the only one that thinks this is kind of a given regardless of the merits of the objection?
voidspark 31 minutes ago [-]
That's not the point of the article.
christkv 2 hours ago [-]
Are we going to get a Steam label for "handcrafted"?
000ooo000 19 hours ago [-]
Can't wait to hear the inevitable slurs people will create to refer to heavy AI users and staunch AI avoiders.
esafak 18 hours ago [-]
Prompt puncher and Luddite come to mind.
Schiendelman 16 hours ago [-]
"Sloppers" appeared in another thread in this post. I've seen it before, I think it'll stick.
jongjong 11 hours ago [-]
> In terms of software quality, I would say the code created by the AI was worse than code written by a human–though not drastically so–and was difficult to work with since most of it hadn’t been written by the people whose job it was to oversee it.
This is a key insight. The other insight is that devs spend most of their time reading and debugging code, not writing it. AI speeds up the writing of code but slows down debugging... AI was trained with buggy code because most code out there is buggy.
Also, when the codebase is complex and the AI cannot see all the dependencies, it performs a LOT worse because it just hallucinates the API calls... It has no idea what version of the API it is using.
TBH, I don't think there exists enough non-buggy code out there to train an AI to write good code which doesn't need to be debugged so much.
When AI is trained on normal language, averaging out all the patterns produces good results. This is because most humans are good at writing with that level of precision. Code is much more precise and the average human is not good at it. So AI was trained on low-quality data there.
The good news for skilled developers is that there probably isn't enough high quality code in the public domain to solve that problem... And there is no incentive for skilled developers to open source their code.
voidhorse 20 hours ago [-]
I think the software industry will look just like the material goods space post-industrialization after the dust settles:
Large corporations will use AI to deliver low-quality software at high speed and high scale.
"Artisan" developers will continue to exist, but in much smaller numbers and they will mostly make a living by producing refined, high-quality custom software at a premium or on creative marketplaces. Think Etsy for software.
That's the world we are heading for, unless/until companies decide LLMs are ultimately not cost-beneficial or overzealous use of them leads to a real hallucination-induced catastrophe.
GarnetFloride 18 hours ago [-]
Sounds like fast fashion. The thinnest, cheapest fabric, slapped together as fast as possible with the least amount of stitching. Shipped fast and obsolete fast.
tmpz22 18 hours ago [-]
Fast fashion - also ruinous to the environment.
grg0 8 hours ago [-]
Bradley's game is DOA. Is it ARK: Aquatica by any chance?
more_corn 16 hours ago [-]
Everyone I know uses it to some degree. Simply having a smart debugger does wonders. You don’t have to give up control, it can help you stay in flow state.
Or it can constantly irritate you if you fight it.
matt3210 18 hours ago [-]
One thing jumps out about the person who noticed the AI was wrong on things they were familiar with. It's like when Elon Musk talks about rockets. I don't know about rockets so I take his word for it. When Elon Musk talked about software, it was obvious he had no idea what he was doing. So when the AI generates something I know nothing about, it looks productive, but when it's generating things with which I'm familiar, I know it's full of shit.
bluefirebrand 18 hours ago [-]
> So when the AI generates something I know nothing about, it looks productive, but when it's generating things with which I'm familiar, I know it's full of shit.
This is why when you hear people talk about how great it is at producing X, our takeaway should be "this person is not an expert at X, and their opinions can be disregarded"
They are telling on themselves that they are not experts at the thing they think the AI is doing a great job at
andybak 17 hours ago [-]
"This is why when you hear people talk about how terrible it is at producing X, our takeaway should be "this person either hasn't tried to use it in good faith, and their opinions can be disregarded"
I'm playing devil's advocate somewhat here but it often seem like that there's a bunch of people on both sides using hella motivated reasoning because they have very strong feelings that developed early on in their exposure to AI.
AI is both terrible and wonderful. It's useless and some things and impressive at others. It will ruin whole sectors of the economy and upturn lives. It will get better and it is getting better so any limitations you currently observe are probably termporary. The net benefit for humanity may turn out to be positive or negative - it's too early to tell.
bluefirebrand 17 hours ago [-]
> AI is both terrible and wonderful. It's useless at some things and impressive at others
That's kind of my problem. I am saying that it mostly only appears impressive to people who don't know better
When people do know better it comes up short consistently
Most of the pro AI people I see are bullish about it on things they have no idea about, like non-technical CEOs insisting that it can create good code
andybak 16 hours ago [-]
> When people do know better it comes up short consistently
I disagree with that part and I don't think this opinion can be sustained by anyone using it with any regularity in good faith
People can argue whether it's 70/30 or 30/70 or what domains it's more useful in than others but you are overstating the negative.
int_19h 15 hours ago [-]
Have you considered that it's actually impressive in some areas that are outside of your interest or concern?
bluefirebrand 15 hours ago [-]
Could be, but why would I trust that when it's clearly so bad at the things I am good at?
ang_cire 11 hours ago [-]
> The net benefit for humanity may turn out to be positive or negative - it's too early to tell.
It's just a tool, but it is unfortunately a tool that is currently dominated by large-sized corporations, to serve Capitalism. So it's definitely going to be a net-negative.
Contrast that to something like 3D printing, which has most visibly benefited small companies and individual users.
Software developers are so aware of "enshittification" and yet also bullish about this generation of AI, it's baffling.
It's very clear the "value" of the LLM generation is to churn out low-cost, low-quality garbage. We already outsourced stuff to Fiverr, but now we can cut people out altogether. Producing "content" nobody wants.
woah 18 hours ago [-]
Why so much hand-wringing? If you are an anti-AI developer and you are able to develop better code faster than someone using AI, good for you. If AI-using developers will end up ruining their codebase in months like many here are saying, then things will take care of themselves.
svantana 18 hours ago [-]
I see two main problems with this approach:
1. productivity and quality are hard to measure
2. the codebase they are ruining is the same one I am working on.
munksbeer 52 minutes ago [-]
> 2. the codebase they are ruining is the same one I am working on.
We're supposed to have a process for dealing with this already, because developers can ruin a codebase without AI.
MathMonkeyMan 20 minutes ago [-]
See point 1.
bluefirebrand 18 hours ago [-]
Faster is not a smart metric to judge a programmer by.
"more code faster" is not a good thing, it has never been a good thing
I'm not worried about pro AI workers ruining their codebases at their jobs
I'm worried about pro AI coworkers ruining my job by shitting up the codebases I have to work in
woah 18 hours ago [-]
I said "better code faster". Delivering features to users is always a good thing, and in fact is the entire point of what we do.
bluefirebrand 17 hours ago [-]
> in fact is the entire point of what we do
Pump the brakes there
You may have bought into some PM's idea of what we do, but I'm not buying it.
As professional, employed software developers, the entire point of what we do is to provide value to our employers.
That isn't always by delivering features to users, it's certainly not always by delivering features faster
AlexandrB 15 hours ago [-]
A lot of modern software dev is focused on delivering features to shareholders, not users. Doing that faster is going to make my life, as a user, worse.
joe_the_user 16 hours ago [-]
Even if you say "better faster" ten times fast, the quality of being produced fast and being broadly good are very different. Speed of development can be measured immediately. Quality is holistic: it's a product not just of clear, well-formed structures but of how the code relates to the rest of a given system.
owebmaster 6 hours ago [-]
Most of the time I get to the real solution for a problem after working on the wrong one for a while. If/when an LLM helps me finish the wrong one faster, it is not helpful and could even be damaging in a situation where it goes to production fast.
nathan_compton 17 hours ago [-]
I've posted recently about a dichotomy which I have had in my head for years as a technical person: there are two kinds of tools; the first lets you do the right thing more easily and the second lets you do the wrong thing more quickly and for longer before you have to pay for it. AI/LLMs can definitely be the latter kind of tool, especially in a context where short term incentives swamp long term ones.
int_19h 15 hours ago [-]
I'm actually pro-AI and I use AI assistants for coding, but I'm also very concerned that the way those things will be deployed at scale in practice is likely to lead to severe degradation of software quality across the board.
Why the hand-wringing? Well, for one thing, as a developer I still have to work on that code, fix the bugs in it, maintain it etc. You could say that this is a positive since AI slop would provide for endless job security for people who know how to clean up after it - and it's true, it does, but it's a very tedious and boring job.
But I'm not just a developer, either - I'm also a user, and thinking about how low the average software quality already is today, the prospect of it getting even worse across the board is very unpleasant.
And as for things taking care of themselves, I don't think they will. So long as companies can still ship something, it's "good enough", and cost-cutting will justify everything else. That's just how our economy works these days.
ang_cire 11 hours ago [-]
This assumes a level of both rationality and omniscience that don't exist in the real world.
If a company fails to compete in the market and dies, there is no "autopsy" that goes in and realizes that it failed because of a chain-reaction of factors stemming from bad AI-slop code. And execs are so far removed from the code level, they don't know either, and their next company will do the same thing.
What you're likely to end up with is project managers and developers who do know the AI code sucks, and they'll be heeded by execs just as much as they are now, which is to say not at all.
And when the bad AI-code-using devs apply to the next business whose execs are pro-AI because they're clueless, guess who they'll hire?
specialist 15 hours ago [-]
Each example's endeavor is the production of culture. The least interesting use case for "AI".
Real wealth creation will come from other domains. These new tools (big data, ML, LLMs, etc) unlock the ability to tackle entirely new problems.
But as a fad, "AI" is pretty good for separating investors from their money.
It's also great for further beating down wages.
bitwize 3 hours ago [-]
I like this article. It opens with a statement of its thesis and presents a few profiles of video game workers whose lives have been negatively impacted by AI. Each profile follows a basic template: a paragraph or so about who they are and what they do, a summary of how AI entered their workplace, a bunch of interview quotes about their reaction to it, and a paragraph at the end about what the final outcome was for the person or their company. Succinct, to the point, and easy to read. You don't see a lot of online journalism like this; the clickbait era has been marked by an entire novel about the E! True Hollywood Story of the major player(s) before the fucking point is even mentioned -- or worse still, AI-generated slop as the body text. Props to Aftermath and Luke Plunkett for maintaining a high standard of prose.
AlienRobot 18 hours ago [-]
A very bad programmer can program some cool stuff with the help of libraries, toolkits, frameworks and engines that they barely understand. I think that's pretty cool and makes things otherwise impossible possible, but it doesn't make the very bad programmer better than they really are.
I believe AI is a variation of this, except a library at least has a license.
matt3210 18 hours ago [-]
The AI code has thousands of licenses but the legal system hasn't caught up
DeathArrow 18 hours ago [-]
If a manager thinks paying $20 monthly for an AI tool will make a developer or artist 5x more productive, he's delusional.
On the other hand, AI can be useful and can accelerate a bit some work.
Jyaif 19 hours ago [-]
“He doesn't know that the important thing isn't just the end result, it's the journey and the questions you answer along the way”
This is satire right?
grg0 8 hours ago [-]
Many of the best games were discovered through an iterative process of trial and error, not through magic divination. So, yes, it is the journey along the way that matters in this kind of creative process. This applies not just to concept art, but game mechanics and virtually every element of the game.
akomtu 15 hours ago [-]
Corporations don't need human workers, they need machines, the proverbial cogs that lack their own will and implement the will of the corporation instead. AI will make it happen: human workers will be managed by AI with sub-second precision and kill whatever little creativity and humanity the workers still had.
lanfeust6 20 hours ago [-]
It would be an understatement to call this a skewed perspective. In most of the anecdotes they seem to try really hard to trivialize the productive benefits of AI, which is difficult to take seriously. The case that LLMs create flawed outputs or are limited in what they can do is not controversial at all, but by and large, the reports from experienced developers are that it has improved their productivity, and it's now part of their workflow. Whether businesses and higher-ups try to use it in absurd ways is neither here nor there. That, and culture issues, were a problem before AI.
Obviously some workers have a strong incentive to oppose adoption, because it may jeopardize their careers. Even if the capabilities are overstated, it can become a self-fulfilling prophecy through higher-ups' choices. Union shops will try to stall it, but it's here to stay. You're in a globally competitive market.
Fraterkes 19 hours ago [-]
If ai exacerbates culture issues and management incompetence then that is an inherent downside of ai.
There is a bunch of programmers who like ai, but as the article shows, programmers are not the only people subjected to ai in the workplace. If you're an artist, you've taken a job that has crap pay and stability for the amount of training you put in, and the only reason you do it is because you like the actual content of the job (physically making art). There is obviously no upside to ai for those people, and this focus on the managers' or developers' perspective is myopic.
andybak 17 hours ago [-]
It might seem hard to believe but there are a bunch of artists who also like AI. People whose artistic practice predates AI. The definition of "artist" is a quagmire which I won't get into but I am not stretching the definition here in any way.
Fraterkes 17 hours ago [-]
I'm sure there are a bunch! I'm an artist, I talk to a bunch of artists physically and online. It's not the prevailing opinion in my experience.
andybak 17 hours ago [-]
Agreed. But it's important to counter the impression that many have that it's nearly unanimous.
lanfeust6 19 hours ago [-]
It's an interesting point that passion jobs that creatives take on (including game dev) tend to be paid less, and where the thrilling component is disrupted, there could be less incentive to bother entering the field.
I think for the most part creatives will still line up for these gigs, because they care about contributing to the end products, not the amount of time they spend using Blender.
Fraterkes 18 hours ago [-]
You are again just thinking from the perspective of a manager: Yes, if these ai jobs need to be filled, artists will be the people filling them. But from the artists perspective there are fewer jobs, and the jobs that do remain are less fulfilling. So: from the perspective of a large part of the workforce it is completely true and rational to say that ai at their job has mostly downsides.
lanfeust6 18 hours ago [-]
> from the artists perspective there are fewer jobs, and the jobs that do remain are less fulfilling.
Re-read what I wrote. You repeated what I said.
> So: from the perspective of a large part of the workforce it is completely true and rational to say that ai at their job has mostly downsides.
For them, maybe.
Fraterkes 17 hours ago [-]
Alright, so doesn't that validate a lot of the feelings and opinions laid out in the OP? Have I broadened your worldview?
oneeyedpigeon 19 hours ago [-]
I have very little objection to AI, providing we get UBI to mitigate the fallout.
parpfish 19 hours ago [-]
I was thinking about this and realized that if we want an AI boom to lead to UBI, AI needs to start replacing the cushy white collar jobs first.
If you start by replacing menial labor, there will be more unemployment but you’re not going to build the political will to do anything because those jobs were seen as “less than” and the political class will talk about how good and efficient it is that these jobs are gone.
You need to start by automating away “good jobs” that directly affect middle/upper class people. Jobs where people have extensive training and/or a “calling” to the field. Once lawyers, software engineers, doctors, executives, etc get smacked with widespread unemployment, the political class will take UBI much more seriously.
stuckinhell 17 hours ago [-]
I suspect elites will build a two-tiered AI system where only a select few get access to the cutting-edge stuff, while the rest of us get stuck with the leftovers.
They'll use their clout—money, lobbying, and media influence—to lock in their advantage and keep decision-making within their circle.
In the end, this setup would just widen the gap, cementing power imbalances as AI continues to reshape everything. UBI will become the bare minimum to keep the masses sedated.
waveringana 19 hours ago [-]
Needing a lawyer and needing a doctor are very common causes of bankruptcy in the US. Both feel very primed to be replaced by models.
lanfeust6 19 hours ago [-]
Incidentally it seems to be happening in that order, but laborers won't have a long respite (if you can call it that)
parpfish 18 hours ago [-]
I think the factor determining which jobs get usurped by AI first isn't going to be cognitive difficulty so much as robotic difficulty and interaction with the physical world.
If your job consists of reading from a computer -> thinking -> entering things back into a computer, you're at the top of the list, because you don't need to set up a bunch of new sensors and actuators. In other words, the easier it is to do your job remotely, the more likely it is you'll get automated away.
hello_computer 19 hours ago [-]
But why will that happen? If they have robots and AI and all the money, what’s stopping the powers that be from disposing of the excess biomass?
lanfeust6 19 hours ago [-]
What's there to gain? What do they care about biomass? They're still in the business of selling products, until the economy explodes. I find this to be circular because you could say the same thing about right now, "why don't they dispose of the welfare class" etc.
There's also the fact that "they" aren't all one and the same persons with the exact same worldview and interests.
achierius 9 hours ago [-]
You speak like they would have to do something 'aggressive'. If you can achieve a circular economy, where robots produce products for the benefit of a lucky few who can live off of their investments (in the robots), then the rest of the population will 'naturally' go away.
You might say "but why not use just 1% of that GDP on making sure the rest of humanity lives in at least minimal comfort"? But clearly -- we already choose not to do that today. 1% of the GDP of the developed world would be more than enough to solve many horrifying problems in the developing world -- what we actually give is a far smaller fraction, and ultimately not enough.
hello_computer 18 hours ago [-]
The Davos class was highly concerned about ecology before Davos was even a thing. In America, their minions (the “coastie” class) are coming to see the liquidation of the kulaks as perhaps not such a bad thing. If it devolves into a “let them eat cake” scenario, one has to wonder how things will play out in “proles vs robot pinkertons”. Watch what the sonic crowd control trucks did in Serbia last week.
Of course there is always the issue of “demand”—of keeping the factories humming, but when you are worth billions, your immediate subordinates are worth hundreds of millions, and all of their subordinates are worth a few million, maybe you come to a point where “lebensraum” becomes more valuable to you than another zero at the end of your balance?
When AI replaces the nerds (in progress), they become excess biomass. Not talking about a retarded hollywood-style apocalypse. Economic uncertainty is more than enough to suppress breeding in many populations. “not with a bang, but a whimper”
If you know any of “them”, you will know that “they” went to the same elite prep schools, live in the same cities, intermarry, etc. The “equality” nonsense is just a lie to numb the proles. In 2025 we have a full-blown hereditary nobility.
edit: answer to lanfeust6:
The West is not The World. There are over a billion Chinese, Indians, Africans…
Words mean things. Who said tree hugger? If you are an apex predator living in an increasingly cloudy tank, there is an obvious solution to the cloudyness.
lanfeust6 17 hours ago [-]
So your take is that the wealthiest class will purge people because they're tree-huggers. Not the worst galaxy-brained thing I've heard before, but still laughable.
Don't forget fertility rate is basically stagnant in the West and falling globally, so this seems like a waste of time considering most people just won't breed at all.
hello_computer 13 hours ago [-]
repeated for thread continuity:
The West is not The World. There are over a billion Chinese, Indians, Africans…
Words mean things. Who said tree hugger? If you are an apex predator living in an increasingly cloudy tank, there is an obvious solution to the cloudyness.
lanfeust6 15 hours ago [-]
also: emissions will continue to drop
hello_computer 14 hours ago [-]
there has been far more degradation to the natural environment than mere air pollution. general sherman decimated the plains indians with a memorandum. do you think that you are sufficiently better and sufficiently more indispensable than a plains indian?
lanfeust6 19 hours ago [-]
Right, well even without AGI (no two people agree on whether it's coming within 5 years, 30, or 100), finely-tuned LLMs can disrupt the economy fast if the bottlenecks get taken care of. The big one is the robot-economy. This is popularly placed further off in timescales, but it does not require AGI at all. We already have humanoid robots on the market for the price of a small car, they're just dumb. Once we scale up solar and battery production, and then manufacturing, it's coming for menial labor jobs. They already have all the pieces, it's a foregone conclusion. What we don't know how to do is to create a real "intelligence", and here the evangelists will wax about the algorithms and the nature of intelligence, but at the end of the day it takes more than scaling up an LLM to constitute an AGI. The bet is that AI-assisted research will lead to breakthrough in a trivial amount of time.
With white-collar jobs the threat of AI feels more abstract and localized, and you still get talk about "creating new jobs", but when robots start coming off the assembly line people will demand UBI so fast it will make your head spin. Either that or they'll try to set fire to them or block them with unions, etc. Hard to say when because another effort like the CHIPS act could expedite things.
dingnuts 19 hours ago [-]
Humanoid robots on the market for the price of a small car? That's complete science fiction. There have been demos of such robots but only demos.
lanfeust6 19 hours ago [-]
> Humanoid robots on the market for the price of a small car? That's complete science fiction.
Do they do anything or are they an expensive toy in the shape of a humanoid robot?
hello_computer 19 hours ago [-]
It’s karma. The creatives weren’t terribly concerned when the factory guys lost their jobs. “Lern to code!” Now it’s our turn to “Learn to OnlyFans” or “Learn to Homeless”
Terr_ 16 hours ago [-]
> The creatives [...] “Lern to code!”
No, the underlying format of "$LABOR_ISSUE can be solved by $CHANGE_JOB" comes from a place of politics, where a politician is trying to suggest they have a plan to somehow tackle a painful problem among their constituents, and that therefore they should be (re-)elected.
Then the politicians piled onto "coal-miners can learn to code" etc. because it was uniquely attractive, since:
1. No big capital expenditures, so they don't need to promise/explain how a new factory will get built.
2. The potential for remote work means constituents wouldn't need to sell their homes or move.
3. Participants wouldn't require multiple years of expensive formal schooling.
4. It had some "more money than you make now" appeal.
hello_computer 14 hours ago [-]
Stating it in patronizing fact-checker tone does not make it true. The tech nerds started it (they love cheap labor pools). Then the politicians joined their masters’ bandwagon. It was a PR blitz. Who has the money for those? Dorseys, Grahams, & Zuckerbergs, or petty-millionaire mayors & congressmen? Politicians are just the house slaves—servants of money.
"Tech nerds" like Dorsey and Zuckerberg have almost nothing in common (on a day-to-day basis, with how they live their lives, their material incentives, etc.) with "tech nerds" like "Intel Employee #783,529". Those are not a single class of people, and it was predominantly the first group that pushed this sort of rhetoric, not the latter.
hello_computer 7 hours ago [-]
24 The disciple is not above his master, nor the servant above his lord.
25 It is enough for the disciple that he be as his master, and the servant as his lord. If they have called the master of the house Beelzebub, how much more shall they call them of his household?
Terr_ 6 hours ago [-]
You whine about a "patronizing fact-checker tone", yet when someone points out a real difference between groups, you flee and sling Bible verses?
Forget these new taxes on Americans who buy Canadian hardwood, we can just supply logs from your eyes.
hello_computer 5 hours ago [-]
It's common sense. That is why it has endured. You people are like mob hitmen standing in moral judgment of your Godfathers. Without your muscle, your Godfather is just an old guy with pasta and a cigar. The "difference" is something you hallucinate so you can feel good about yourselves.
how much more shall they call them of his household?
Fraterkes 19 hours ago [-]
"learn to code" was thrown around by programmers, not creatives. Everyone else (including writers and artists) has long hated that phrase, and condemded it as stupid and shortsighted.
hello_computer 19 hours ago [-]
“learn to code” was from the media. whether they deserve to be classified as “creatives” i will leave to the philosophers.
DadBase 11 hours ago [-]
I’ve been doing “vibe coding” since Borland C++. We used to align the mood of the program with ambient ANSI art in the comments. If the compiler crashed, that meant the tone was off.
Animats 11 hours ago [-]
AI-generated art just keeps getting better. This looks like a losing battle.
gazebo64 10 hours ago [-]
I think the most salient point the artists make in the article is that the process of ideating and iterating on art is just as valuable, if not more so, than the end result. You can get a good-looking image from an AI generator but miss out on the ideas and discoveries you would otherwise make by actually working on that art.
I think it's also unfortunate how the advocates for AI replacing artists in gamedev clearly think of art as a chore or a barrier to launch rather than the whole point of what they're making. If games are art, then it stands to reason the... art... that goes into them is just as important as anything else. A game isn't defined just by the logic of the main loop.
aucisson_masque 11 hours ago [-]
AI is a blatant case of Darwinism.
There are those who adapt, those who will keep moaning about it, and finally those who believe it can do everything.
The first will succeed, the second will be replaced, the third is going to get hurt.
I believe this article and the people it mentions are mostly from the second category. Yet no one in their right mind can deny that AI makes writing code faster, not necessarily better but faster, and games in the end are mostly code.
Of course AI is going to get pushed hard by your CEO; he knows that if he doesn't, another competitor who uses it will be able to produce more games, faster and cheaper.
gazebo64 10 hours ago [-]
>Yet no one in their right mind can deny that AI makes writing code faster, not necessarily better but faster, and games in the end are mostly code.
It's actually quite easy, and not uncommon, to deny all of those things. Game code is complex and massively interwoven, and relying on generated code that you didn't write and don't fully understand will certainly break down as game systems increase in complexity, and you will struggle to maintain it or make effective modifications -- so ignoring the fact that the quality is lower, there's an argument to be made that it will be "slower" to write in the long term.
I think it's also flat wrong to say games are "mostly code" -- games are a visual medium and people remember the visual/audio experience they had playing a game. Textures, animations, models, effects, lighting, etc. all define what a game is just as much if not more than the actual gameplay systems. Art is the main differentiating factor between games when you consider that most gameplay systems are derivative of one another.
grg0 8 hours ago [-]
And then there is a fourth category: those who preach things they have no idea about.
voidspark 29 minutes ago [-]
You didn't read the article.
ohgr 11 hours ago [-]
So on that basis you think the market is happy with shit things made very fast?
I can assure you it's not. And people are starting to realise that there is a lot of shit. And know that LLMs generate it.
ang_cire 11 hours ago [-]
> another competitor who uses it will be able to produce more games, faster and cheaper
And yet this is no guarantee they will succeed. In fact, the largest franchises and games tend to be the ones that take their time and build for quality. There are a thousand GTA knock-offs on Steam, but it's R* that rakes in the money.
indoordin0saur 16 hours ago [-]
This article is an example of why the gender-neutral use of pronouns makes things a pain to read. If you're already changing the interviewees' names then IDK why you couldn't just pick an arbitrary he/she pronoun to stick to for one character.
> Francis says their understanding of the AI-pusher’s outlook is that they see the entire game-making process as a problem, one that AI tech companies alone think they can solve. This is a sentiment they do not agree with.
gwbas1c 15 hours ago [-]
"they" was a gender-neutral pronoun when I was in school in the 1990s.
indoordin0saur 14 hours ago [-]
It has been considered normal in some colloquial uses for a long time. But until the late 2010s/early 2020s all style guides considered it to be poor form due to the ambiguity and muddy sentence structure it creates. Recommendations were changed recently for political reasons.
xzsinu 5 hours ago [-]
Maybe recommendations changed recently because it has been considered normal in colloquial use for a long time.
spacecadet 11 hours ago [-]
Shit changes. You can either let it roll off you or over you. A lot less painful rolling off.
ryoshoe 15 hours ago [-]
Singular they was used by respected authors even as far back as the 19th century.
add-sub-mul-div 16 hours ago [-]
There's nothing painful about this to anyone who hasn't been conscripted into the culture wars.
indoordin0saur 14 hours ago [-]
But it was the culture war that resulted in this change to the language. Previous to the war, singular 'they' was to be avoided due to the ambiguity it introduces.
spacecadet 11 hours ago [-]
What ambiguity? We know it's a human, the human has a name. We do not know their gender or sex, both are not relevant. They works perfectly.
This seems like a you problem...
add-sub-mul-div 10 hours ago [-]
It's not a culture war when attitudes towards gender evolve, just like it wasn't a culture war that some people are gay.
It's not a culture war until there's two sides, until a segment of the population throws a hissyfit because new ideas make them uncomfortable.
gwbas1c 17 hours ago [-]
> “He doesn't know that the important thing isn't just the end result, it's the journey and the questions you answer along the way”. Bradley says that the studio’s management have become so enamoured with the technology that without a reliance on AI-generated imagery for presentations and pitches they would not be at the stage they are now, which is dealing with publishers and investors.
Take out the word AI and replace it with any other tool that's over-hyped or over-used, and the above statement will apply to any organization.
nilkn 20 hours ago [-]
I don't have much sympathy for this. This country has long expected millions and millions of blue collar workers to accept and embrace change or lose their careers and retirements. When those people resisted, they were left to rot. Now I'm reading a sob story about someone throwing a fit because they refuse to learn to use ChatGPT and Claude and the CEO had to sit them down and hold their hand in a way. Out of all the skillset transitions that history has required or imposed, this is one of the easiest ever.
They weren't fired; they weren't laid off; they weren't reassigned or demoted; they got attention and assistance from the CEO and guidance on what they needed to do to change and adapt while keeping their job and paycheck at the same time, with otherwise no disruption to their life at all for now.
Prosperity and wealth do not come for free. You are not owed anything. The world is not going to give you special treatment or handle you with care because you view yourself as an artisan. Those are rewards for people who keep up, not for those who resist change. It's always been that way. Just because you've so far been on the receiving end of prosperity doesn't mean you're owed that kind of easy life forever. Nobody else gets that kind of guarantee -- why should you?
The bottom line is the people in this article will be learning new skills one way or another. The only question is whether those are skills that adapt their existing career for an evolving world or whether those are skills that enable them to transition completely out of development and into a different sector entirely.
raxxorraxor 26 minutes ago [-]
Having spent hours upon hours with image synthesis for artistic hobby purposes, I can say it is indeed an awesome tool. If you get into it, you might learn about its limitations, though.
Real knowledge here is often absent from the strongest AI proselytizers; others are more realistic about it. It still remains an awesome tool, but a limited one.
AIs today are not creative at all. They find statistical matches. They perform a different work than artists do.
But please, replace all your artwork with AI generated ones. I believe the forced "adapt" phase with that approach would realize itself rather quickly.
voidspark 28 minutes ago [-]
You obviously didn't read the article. So many people here are replying before reading it.
kmeisthax 17 hours ago [-]
If you're not doing the work, you're not learning from the result.
The CEOs in question bought what they believed to be a power tool, but got what is more like a smarter copy machine. To be clear, copy machines are not useless, but they also aren't going to drive the 200% increases in productivity that people think they will.
But because management demands the 200% increase in productivity they were promised by the AI tools, all the artists and programmers on the team hear "stop doing anything interesting or novel, just copy what already exists". To be blunt, that's not the shit they signed up for, and it's going to result in a far worse product. Nobody wants slop.
petesergeant 20 hours ago [-]
> These are rewards for people who keep up, not for those who resist change.
lol. I work with LLM outputs all day -- like it's my job to make the LLM do things -- and I probably speak to some LLM to answer a question for me between 10 and 100 times a day. They're kinda helpful for some programming tasks, but pretty bad at others. Any company that tried to mandate me to use an LLM would get kicked to the curb. That's not because I'm "not keeping up", it's because they're simply not good enough to put more work through.
ewzimm 20 hours ago [-]
Wouldn't this depend a lot on how management responds to your use? For example, if you just kept a log of prompts and outputs with notes about why the output wasn't acceptable, that could be considered productive use in this early stage of LLMs, especially if management's goal was to have you learning how to use LLMs. Learning how not to use something is just as important in the process of adapting any new tool.
If management is convinced of the benefits of LLMs and the workers are all just refusing to use them, the main problem seems to be a dysfunctional working environment. It's ultimately management's responsibility to work that out, but if the management isn't completely incompetent, people tasked with using them could do a lot to help the situation by testing and providing constructive feedback rather than making a stand by refusing to try and providing grand narratives about damaging the artistic integrity of something that has been commoditized from inception like video game art. I'm not saying that video game art can't be art, but it has existed in a commercial crunch culture since the 1970s.
achierius 9 hours ago [-]
What sort of tasks have you seen them struggle with? Not to dispute, just collecting datapoints for my own sake.
petesergeant 5 hours ago [-]
Anything with even vaguely complicated TypeScript types, hallucinating modules, writing tests that are useful rather than just performative, as recent examples…
EigenLord 8 hours ago [-]
I can see why some fields would have an overwhelmingly negative reaction to AI, but I simply can't grasp why some software devs do. The entire point of the field is to get computers to do stuff for you. I've been doing this s*it for 10 years; there are too many little details and commands to remember and too much brutally dull work to not automate it.
I also have come to realize that in software development, coding is secondary to logical thinking. Logical thinking is the primary medium of every program, the language is just a means to express it. I may have not memorized as many languages as AI, but I can think better than it logically. It helps me execute my tasks better.
Also, I've been able to do all kinds of crazy and fun experiments thanks to genAI. Knowing myself I know realistically I will never learn LISP, and will always retain just an academic interest in it. But with AI I can explore these languages and other areas of programming beyond my expertise and experience much more effectively than ever before. Something about the interactive chat interface keeps my attention and allows me to go way deeper than textbooks or other static resources.
I do think in many ways it's a skill issue. People conceptualize genAI as a negation of skills, an offloading of skill to the AI, but in actuality grokking these things and learning how to work with them is its own skill. Of course managers just forcing it on people will elicit a bad reaction.
layer8 4 hours ago [-]
As long as one has to double-check and verify every single output, I don’t think that “automation” is the right word. Every LLM use is effectively a one-off and cannot be repeated blindly.
namaria 5 minutes ago [-]
Undefined behavior as a service is truly a bizarre proposition to my ears. Layering undefined behavior (agents) and gaming undefined behavior in hopes it comes out as you need (prompting) sounds insane and sometimes I have to wonder if I am the insane one. Very weird times.
000ooo000 8 hours ago [-]
You start by saying it's logical thinking that is a SE's value and then close by suggesting learning how to offload that logical thinking to AI is a 'skill'. Bizarre.
yahoozoo 6 hours ago [-]
> I've been doing this s*it for 10 years, there's too many little details and commands to remember and too much brutally dull work to not automate it.
git gud
AI has the added benefit of being the currently in-vogue buzzword, and any and every grant or investment sounds way better with it than without, even if it adds absolutely nothing whatsoever.
Or maybe they can just use the AI to write creative emails to management explaining why they weren’t able to use AI in their work this day/week/quarter.
Possibly this is just among the smallish group of students I know at MIT, but I would be surprised to hear that a biomedical researcher has no use for them.
I do have to say that we're just approaching the tip of the iceberg, and there are huge issues related to standardization, dirty data... We still need the supervision and the help of one of the two professors to proceed, even with LLMs.
There is no place for me in this environment. It's not that I couldn't use the tools to produce that much code; it's that AI use makes speed-to-production the metric for success. The solution to bad code is more code. AI will never produce a deletion. Publish or perish has come for us and it's sad. It makes me feel old, just like my Python programming made the mainframe people feel old. I wonder what will make the AI developers feel old…
Unless you meant that AI won’t remove entire features from the code. But AI can do that too if you prompt it to. I think the bigger issue is that companies don’t put enough value on removing things and only focus on adding new features. That’s not a problem with AI though.
The one actual major downside to AI is that PMs and higher-ups are now looking for problems to solve with it. I haven't really seen this much with technology before, except when cloud first became a thing and maybe sometimes with Microsoft products.
Please don't do this :) Readable code is better than clever code!
I should also note that development style also depends on tools, so if your IDE makes inline functions more readable in its display, it's fine to use concisely defined lambdas.
Readability is a personal preference thing at some point, after all.
I think what you're looking for is "x or 1"
https://github.com/dwmkerr/hacker-laws#kernighans-law
At least with human-written clever code you can trust that somebody understood it at one point but the idea of trusting AI generated code that is "clever" makes my skin crawl
And was the code they were writing before they had an LLM any better?
My guess would be that engineers who are "forced" to use AI have already mailed management that it would be a mistake and are interviewing for their next company. Malicious compliance: vibe code those new features and let maintainability and security be a problem for the next employees / consultants.
And I know it's intentional, but yes. Add some mindfulness to your implementation
Map["blah"] = fooIsTrue;
I do see your example in the wild sometimes. I've probably done it myself as well and never caught it.
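For readers missing the elided context: the pattern under discussion appears to be branching just to store a boolean, versus assigning the condition directly, as in the snippet above. A minimal Python sketch of the two styles; the names `feature_flags` and `foo_is_true` are illustrative, not from the thread:

    # Minimal illustration: branching to store a boolean vs. assigning it directly.
    feature_flags: dict[str, bool] = {}
    foo_is_true = 3 > 2  # stand-in for whatever condition is being checked

    # Verbose pattern often seen in the wild:
    if foo_is_true:
        feature_flags["blah"] = True
    else:
        feature_flags["blah"] = False

    # Equivalent direct assignment, as in the snippet above:
    feature_flags["blah"] = foo_is_true

    print(feature_flags)  # {'blah': True}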
As a side note, I've had coworkers disappear for N days too and in that time the requirements changed (as is our business) and their lack of communication meant that their work was incompatible with the new requirements. So just because someone achieves a 10x speedup in a vacuum also isn't necessarily always a good thing.
A declarative framework for testing may make sense in some cases, but in many cases it will just be a complicated way of scripting something you use once or twice. And when you use it you need to call up the maintainer anyway when you get lost in the yaml.
Which of course feels good for the maintainer, to feel needed.
u/justonceokay wrote:
> The solution to bad code is more code.
This has always been true, in all domains.
Gen-AI's contribution is further automating the production of "slop". Bots arguing with other bots, perpetuating the vicious cycle of bullshit jobs (David Graeber) and enshittification (Cory Doctorow).
u/justonceokay wrote:
> AI will never produce a deletion.
I acknowledge your example of tidying up some code. What Bill Joy may have characterized as "working in the small".
But what of novelty, craft, innovation? Can Gen-AI moot the need for code? Like the oft-cited example of -2,000 LOC? https://www.folklore.org/Negative_2000_Lines_Of_Code.html
Can Gen-AI do the (traditional, pre 2000s) role of quality assurance? Identify unnecessary or unneeded work? Tie functionality back to requirements? Verify the goal has been satisfied?
Not yet, for sure. But I guess it's conceivable, provided sufficient training data. Is there sufficient training data?
You wrote:
> only focus on adding new features
Yup.
Further, somewhere in the transition from shipping CDs to publishing services, I went from developing products to just doing IT & data processing.
The code I write today (in anger) has a shorter shelf-life, creates much less value, is barely even worth the bother of creation much less validation.
Gen-AI can absolutely do all this @!#!$hit IT and data processing monkey motion.
During interviews one of my go-to examples of problem solving is a project I was able to kill during discovery, cancelling a client contract and sending everyone back to the drawing board.
Half of the people I've talked to do not understand why that might be a positive situation for everyone involved. I need to explain the benefit of having clients think you walk on water. They're still upset my example isn't heavy on any of the math they've memorized.
It feels like we're wondering how wise an AI can be in an era where wisdom and long-term thinking aren't really valued.
I would argue that a plurality, if not the majority, of business needs for software engineers do not need more than a single person with those skills. Better yet, there is already some executive that is extremely confident that they embody all three.
No, because if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute." Relatedly is an idea I often quote against "low/no code tooling" that by the time you have an idea of what you want done specific enough for a computer to execute it, whatever symbols you use to express that idea -- be it through text, diagrams, special notation, sounds, etc. -- will be isomorphic to constructs in some programming language. Relatedly, Gerald Sussman once wrote that he sought a language in which to discuss ideas with his friends, both human and electronic.
Code is a notation, like mathematical notation and musical notation. It stands outside prose because it expresses an idea for a procedure to be done by machine, specific enough to be unambiguously executable by said machine. No matter how hard you proompt, there's always going to be some vagueness and nuance in your English-language expression of the idea. To nail down the procedure unambiguously, you have to evaluate the idea in terms of code (or a sufficiently code-like notation as makes no difference). Even if you are working with a human-level (or greater) intelligence, it will be much easier for you and it to discuss some algorithm in terms of code than in an English-language description, at least if your mutual goal is a runnable version of the algorithm. Gen-AI will just make our electronic friends worthy of being called people; we will still need a programming language to adequately share our ideas with them.
In the same way that we use AI to write resumés to be read by resumé-scanning AI, or where execs use AI to turn bullet points into a corporate email only for it to be summarised into bullet points by AI, perhaps we are entering the era where AI generates code that can only be read by an AI?
If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.
If at every company I work for, my managers average 7-8 months in their role as _my_ manager, and I am switching jobs every 2-3 years because companies would rather rehire their entire staff than give out raises that are even a portion of the market growth, why would I care?
Not that the market is currently in that state, but that's how a large portion of tech companies were operating for the past decade. Long term consequences don't matter because there are no longer term relationships.
When they look at the calendar and it says May 2025 instead of April
I'm currently reading an LLM-generated deletion. It's hard to get an LLM to work with existing tools, but not impossible.
I suspect he is pretty unimpressed by the code that LLMs produce given his history with code he thinks is subpar, but what do I know
https://blog.mathieuacher.com/LinusTorvaldsLLM/
Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?
> Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?
This reminds me of an old software engineering adage.
LevelsIO's flight simulator sucked. But his payoff-to-effort ratio is so absurdly high, as a business type you have to be brain-dead to leave money on the table by refusing to try replicating his success.
It's the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two.
The market can remain irrational longer than you can remain solvent.
Use LLM to write Haskell. Problem solved?
Ah yes, maintenance, the most fun and satisfying part of the job. /s
new 2025 slang just dropped
I am sure assembly programmers were horrified at the code the first C compilers produced. And I personally am horrified by the inefficiency of python compared to the C++ code I used to write. We always have traded faster development for inefficiency.
This created a league of incredibly elitist[0] programmers who, having mastered what they thought was the rules of C, insisted to everyone else that the real problem was you not understanding C, not the fact that C had made itself a nightmare to program in. C is bad soil to plant a project in even if you know where the poison is and how to avoid it.
The inefficiency of Python[1] is downstream of a trauma response to C and all the many, many ways to shoot yourself in the foot with it. Garbage collection and bytecode are tithes paid to absolve oneself of the sins of C. It's not a matter of Python being "faster to write, harder to execute" as much as Python being used as a defense mechanism.
In contrast, the trade-off from AI is unclear, aside from the fact that you didn't spend time writing it, and thus aren't learning anything from it. It's one thing to sacrifice performance for stability; versus sacrificing efficiency and understanding for faster code churn. I don't think the latter is a good tradeoff! That's how we got under-baked and developer-hostile ecosystems like C to begin with!
[0] The opposite of a "DEI hire" is an "APE hire", where APE stands for "Assimilation, Poverty & Exclusion"
[1] I'm using Python as a stand-in for any memory-safe programming language that makes use of a bytecode interpreter that manipulates runtime-managed memory objects.
And even among languages that do have a full virtual machine, Python is slow. Slower than JS, slower than Lisp, slower than Haskell by far.
That, right here, is a world-shaking statement. Bravo.
But the sentiment is true, by default current LLMs produce verbose, overcomplicated code
However I wouldn't say refactoring is as hands free as letting AI produce the code in the first place, you need to cherry pick its best ideas and guide it a little bit more.
You can't win an argument with people who don't care if they're wrong, and someone who begins a sentence that way falls into that category.
I'm adding `.noai` files to all the projects going forward:
https://www.jetbrains.com/help/idea/disable-ai-assistant.htm...
AI may be somewhat useful for experienced devs but it is a catastrophe for inexperienced developers.
"That's OK, we only hire experienced developers."
Yes, and where do you suppose experienced developers come from?
Again and again in this AI arc I'm reminded of the sorcerer's apprentice scene from Fantasia.
Strictly speaking, you don't even need university courses to get experienced devs.
There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed, simply because they'll have just that much more experience from trying various stuff.
Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.
This is honestly not my experience with self taught programmers. They can produce excellent code in a vacuum but they often lack a ton of foundational stuff
In a past job, I had to untangle a massive nested loop structure written by a self taught dev, which did work but ran extremely slowly
He was very confused and asked me to explain why my code ran fast, his ran slow, because "it was the same number of loops"
I tried to explain Big O, linear versus exponential complexity, etc, but he really didn't get it
But the company was very impressed by him and considered him our "rockstar" because he produced high volumes of code very quickly
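The anecdote's actual code isn't shown, but the classic version of this trap is a membership test on a list inside a loop: it looks like "the same number of loops" while hiding a second, linear scan, so the whole thing is quadratic rather than linear. A minimal Python sketch with made-up sizes, just to show the gap:

    import time

    items = list(range(10_000))
    lookups = list(range(10_000))

    # "Same number of loops" on the surface, but `x in items` on a list is a
    # hidden linear scan, so this is O(n*m) overall.
    start = time.perf_counter()
    slow_hits = sum(1 for x in lookups if x in items)
    print(f"list membership: {time.perf_counter() - start:.2f}s")

    # Building a set once makes each membership test ~O(1), so this is O(n + m).
    start = time.perf_counter()
    item_set = set(items)
    fast_hits = sum(1 for x in lookups if x in item_set)
    print(f"set membership:  {time.perf_counter() - start:.2f}s")

    assert slow_hits == fast_hits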
But you have to know this knowledge exists in the first place. That's part of the appeal of university teaching: it makes you aware of many different paradigms. So the day you stumble on one of them you know where to look for a solution. And usually you learn how to read (and not to fear) reading scientific papers which can be useful. And statistics.
My point is you don't know what you don't know. There is really only so far you can get by just noodling around on your own, at some point we have to learn from more experienced people to get to the next level
School is a much more consistent path to gain that knowledge than just diving in
It's not the only path, but it turns out that people like consistency
A senior dev mentioned a "class invariant" the other day, and I just had no idea what that was because I've never been exposed to it… So I suppose the question I have is: what should I be exposed to in order to know that? What else is there that I need to learn about software engineering that I don't know, that is similarly going to be embarrassing on the job if I don't know it? I've got books like Cracking the Coding Interview and Software Engineering at Google… but I am missing a huge gap because I was unable to finish my master's in computer science :-(
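For the question above: a class invariant is simply a condition on an object's state that the constructor establishes and every public method preserves; the term comes from design-by-contract material (Bertrand Meyer) rather than interview-prep books. A minimal Python sketch using the textbook bank-account example (illustrative, not from this thread):

    class BankAccount:
        """Class invariant: self.balance is always >= 0."""

        def __init__(self, balance: int = 0) -> None:
            if balance < 0:
                raise ValueError("initial balance must be non-negative")
            self.balance = balance

        def withdraw(self, amount: int) -> None:
            # Refusing the operation (instead of letting the balance go
            # negative) is what preserves the invariant for every caller.
            if amount < 0 or amount > self.balance:
                raise ValueError("invalid withdrawal")
            self.balance -= amount

        def deposit(self, amount: int) -> None:
            if amount < 0:
                raise ValueError("invalid deposit")
            self.balance += amount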
(Serious comment! It's "the" algorithms book).
> Not to discredit University degrees of course - the best devs will have gotten formal teaching and code in their free time.
>People like that will always be more effective at their job once employed
My experience is that "self taught" people are passionate about solving the parts they consider fun but do not have the breadth to be as effective as most people who have formal training but less passion. The previous poster also called out real issues with this kind of developer (not understanding time complexity or how to fix things) that I have repeatedly seen in practice.
I just pointed out that removing classes entirely would still get you experienced people. Even if they'd likely be better if they code and get formal training. I stated that very plainly.
You actually didn't state it very plainly at all. Your initial post is contradictory, look at these two statements side by side
> There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed
> the best devs will have gotten formal teaching and code in their free time
People who enjoy coding without formal training -> more effective
People who enjoy coding and have formal training -> best devs
Anyways I get what you were trying to say, now. You just did not do a very good job of saying it imo. Sorry for the misunderstanding
> There will always be individuals that enjoy coding and do so without any formal teaching. People like that will always be more effective at their job once employed
As "people who enjoy coding and didn't need formal training to get started". It includes both people who have and don't have formal training.
Both statements together are (enthusiasm + formal) > (enthusiasm without formal) > (formal without enthusiasm).
> Both statements together are (enthusiasm + formal) > (enthusiasm without formal) > (formal without enthusiasm).
I don't think the last category (formal education without enthusiasm) really exists, I think it is a bit of a strawman being held up by people who are *~passionate~*
I suspect that without any enthusiasm, people will not make it through any kind of formal education program, in reality
If you've never encountered the average 9-5 dev that just does the least amount of effort they can get away with, then I have to applaud the HR departments of the companies you've worked for. Whatever they're doing, they're doing splendid work.
And almost all of my coworkers are university grads that do literally the same you've used as an example for non formally taught people: they write abysmally performing code because they often have an unreasonable fixation on practices like inversion of control (as a random example).
As a particularly hilarious example I've had to explain to such a developer that an includes check on a large list in a dynamic language such as JS performs abysmally
While you definitely lose acuity once you stop exploring new concepts in your free time, the amount of knowledge gained after you've already spent 10-20 years coding drops off a cliff, making this free-time investment progressively less essential.
My point was that most of my coworkers never went through an enthusiastic phase in which they coded in their free time. Neither before university, nor during, nor after. And it's very noticeable that they're not particularly good at coding either.
Personally, I think it's just that people who are good at coding inevitably become enthusiastic enough to do it in their free time, at least for a few years. Hence the inverse is true: people who didn't go through such a phase (which most of my coworkers are)... aren't very good at it. Whether they went to university and got a degree or not.
Does it perform any better in statically compiled languages?
You get experienced devs from inexperienced devs that get experience.
[edit: added "degrees" as intended. University was mentioned as the context of their observation]
We're talking about the industry responsible for ALL the growth of the largest economy in the history of the world. It's not the 1970s anymore. You can't just count on weirdos in basements to build an industry.
That's not the kind of experience companies look for though. Do you have a degree? How much time have you spent working for other companies? That's all that matters to them.
Almost every time I hear this argument, I realize that people are not actually complaining about AI, but about how modern capitalism is going to use AI.
Don't get me wrong, it will take huge social upheaval to replace the current economic system.
But at least it's an honest assessment -- criticizing the humans that are using AI to replace workers, instead of criticizing AI itself -- even if you fear biting the hands that feed you.
I think you misunderstand OP's point. An employer saying "we only hire experienced developers [therefore worries about inexperienced developers being misled by AI are unlikely to manifest]" doesn't seem to realize that the AI is what makes inexperienced developers. In particular, using the AI to learn the craft will not allow prospective developers to learn the fundamentals that will help them understand when the AI is being unhelpful.
It's not so much to do with roles currently being performed by humans instead being performed by AI. It's that the experienced humans (engineers, doctors, lawyers, researchers, etc.) who can benefit the most from AI will eventually retire and the inexperienced humans who don't benefit much from AI will be shit outta luck because the adults in the room didn't think they'd need an actual education.
... smells very similar to tobacco/soda industry. Both created faux-research institutes to further their causes.
Considering HPC is half CPU and half GPU (more like 66% CPU and 33% GPU, but I'm being charitable here), I expect an average power draw of 3.6 kW in a cluster. Moreover, most of these clusters run targeted jobs; prototyping/trial runs use much more limited resources.
On the other hand, AI farms run all their GPUs at full power almost 24/7, both for training new models and for inference. And before you ask: if you have a GPU farm where you do training, buying inference-focused cards doesn't make sense, because you can partition NVIDIA cards with MIG. You can set aside some training cards, split each into 6-7 instances, and run inference on them, resulting in ~45 virtual cards for inference per server, again at ~6.1 kW load.
So, yes, AI's power load profile is different.
Emissions aside, many data centres (and the associated bitcoin-mining and AI clusters) are a significant local issue due to their demand on local water and energy supplies.
You must be joking. Consumer models' primary source of training data seems to be the legal preambles from BDSM manuals.
Something very similar can be said about the issue of guns in America. We live in a profoundly sick society where the airwaves fill our ears with fear, envy and hatred. The easy availability of guns might not have been a problem if it didn't intersect with a zero-sum economy.
Couple that with the unavailability of community and social supports and you have a recipe for disaster.
This was pretty consistently my and many others viewpoint since 2023. We were assured many times over that this time it would be different. I found this unconvincing.
Why? I feel less competent at my job. I feel my brain becoming lazy. I enjoy programming a lot, why do I want to hand it off to some machine? My reasoning is that if I spend time practicing and getting really good at software engineering, my work is much faster, more accurate and more reliable and maintainable than an AI agent's.
In the long run, using LLMs for producing source code will make things a lot slower, because the people using these machines will lose the human intuition that an AI doesn't have. Be careful.
It tends to be the same with anything hyped / divisive. Humans tend to exaggerate in both directions in communication, especially in low-stakes environments such as an internet forum, or when they stand to gain something from the hype.
For example, some companies are using AI to create tickets or to collate feedback from users.
I can clearly see that this makes them think through the problem far less. A lot of that sixth-sense understanding of the problem space comes from working through those ticket-creation or product documents, which are now being written by AI.
That is causing the quality of the work to drop into this weird, drone-like, NPC-like state where they aren't really solving real issues, yet they're getting a lot of stuff done.
It's still very early so I do not know how best to talk to them about it. But it's very clear that any sort of creative work, problem solving, etc has huge negative implications when AI is used even a little bit.
I have also started to think that a great angel investment question is to ask companies if they are a non AI zone and investing in them will bring better returns in the future.
I just spent a week fixing a concurrency bug in generated code. Yes, there were tests; I uncovered the bug when I realized the test was incorrect...
My strong advice, is to digest every line of generated code; don't let it run ahead of you.
Of course yeeting bad code into production with a poor review process is already a thing. But this will scale that bad code as now you have developers who will have grown up on it.
We humans have way more context and intuition to rely on to implement business requirements in software than a machine does.
I think this is the biggest risk. You sometimes get stuck in a cycle in which you hope the AI can fix its own mistake, because you don’t want to expend the effort to understand what it wrote.
It’s pure laziness that occurs only because you didn’t write the code yourself in the first place.
At the same time, I find myself incredibly bored when typing out boilerplate code these days. It was one thing with Copilot, but tools like Cursor completely obviate the need.
I recently played through a game and after finishing it, read over the reviews.
There was a brief period after launch where the game was heavily criticised for its use of AI assets. They removed some, but apparently not all (or more likely, people considered the game tainted and started claiming everything was AI)
The (I believe) 4 person dev team used AI tools to keep up with the vast quantity of art they needed to produce for what was a very art heavy game.
I can understand people with an existing method not wanting to change. And AI may not actually be a good fit for a lot of this stuff. But I feel like the real winners are going to be the people who do a lot more with a lot less out of sheer necessity to meet outrageous goals.
Perhaps we would be able to synthesize some text, voice and imaging. Also AI can support coding.
While AI can probably do a snake game (one that perhaps runs/compiles) or attempt to more or less recreate well-known codebases like that of Quake (which certainly does not compile), it can only help if the developer does the main work, that is, dissecting problems into smaller ones until some of them can be automated away. That can improve productivity a bit and certainly could improve developer training. If companies were so inclined to invest in their workforce...
I can only hope endeavors (experiments?) this extreme fail fast and we learn from them.
I'm deeply influenced by languages like Forth and Lisp, where that kind of bottom-up code is the cultural standard, and I prefer it, probably because I don't have the kind of linear intelligence and huge memory of an LLM.
For me the hardest part of using LLMs is knowing when to stop and think about the problem in earnest, before the AI generated code gets out of my human brain's capacity to encompass. If you think a bit about how AI still is limited to text as its white board and local memory, text which it generates linearly from top to bottom, even reasoning, it sort of becomes clear why it would struggle with genuine abstraction over problems. I'm no longer so naive as to say it won't happen one day, even soon, but so far its not there.
That business model hasn't been going so well in recent years[0], and it's already been proclaimed dead in some corners of the industry[1]. Many industry legends have started their own studios (H. Kojima, J. Solomon, R. Colantonio, ...), producing games for the right reasons. When these games are inevitably mainstream hits, that will be the inflection point where the old industry significantly declines. Or that's what I think, anyway.
[0] https://www.matthewball.co/all/stateofvideogaming2025
[1] https://www.youtube.com/watch?v=5tJdLsQzfWg
AI is considered a potential future growth engine, as it cuts costs in art production, where the bulk of game production costs lie. Game executives are latching onto it hard because it's arguably one of the few straightforward ways to keep growing their publicly-traded companies and their own stock earnings. But technologists already know how this will end.
Other games industry leaders are betting on collapse and renewal to simpler business models, like self-funded value-first games. Also, many bet on less cashflow-intensive game production, including lower salaries (there is much to be said about that).
Looking at industry reports and business circle murmurs, this is the current state of gaming. Some consider it optimistic, and others (especially the business types without much creative talent) - dire. But it does seem to be the objective situation.
[0] VC investment has been down by more than 10x over the last two years, and many big Western game companies have lost investors' money in the previous five years. See Matthew Ball's report, which I linked in my parent comment, for more info.
[1] The games industry has seen more than 10% sustained attrition over the last 5 years, and about 50% of employees hope to leave their employer within a year: https://www.skillsearch.com/news/item/games---interactive-sa...
I just don't think that's true in a world where Marvel Rivals was the biggest launch of 2024. Live service games like Path of Exile, Counter-Strike, Genshin Impact, etc. make boatloads of money and have ever rising player counts.
The problem is that it's a very sink-or-swim market - if you manage to survive 2-3 years you will probably make it, but otherwise you are a very expensive flop. Not unlike VC-funded startups - just because some big names failed doesn't make investing into a unicorn any less attractive.
Outside of live service everyone is also looking for that new growth driver. In my opinion the chances are though we're in for a longish period of stagnation. I don't even share the OPs rosey outlook towards more "grassroots" developers. Firstly because they're still businesses even with a big name attached. Secondly because there is going to be a bloodbath due to the large number of developers pivoting in that direction. It'll end up like the indie market where there are so many entrants success is extremely challenging to find.
I wrote a project where I'd initially hardcoded a menu hierarchy into its Rust. I wanted to pull that out into a config file so it could be altered, localized, etc. without users having to recompile the source. I opened a “menu.yaml” file, typed the name of the top-level menu, paused for a moment to sip coffee, and Zed popped up a suggested completion of the file which was syntactically correct and perfect for use as-is.
I honestly expected I’d spend an hour mechanically translating Rust to YAML and debugging the mistakes. It actually took about 10 seconds.
It’s also been freaking brilliant for writing docstrings explaining what the code I just manually wrote does.
I don't want to use AI to write my code, any more than I'd want it to solve my crossword. I sure like having it help with the repetitive gruntwork and boilerplate.
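The commenter's project was Rust, but the config-extraction idea is language-agnostic: move the hardcoded hierarchy into a data file and load it at startup. A minimal Python sketch; the `menu.yaml` contents and the `load_menu` helper are hypothetical, just to show the shape of it:

    # Hypothetical menu.yaml -- not the commenter's actual file:
    #
    #   label: Main
    #   items:
    #     - label: File
    #       items:
    #         - label: Open
    #         - label: Quit
    #     - label: Help

    import yaml  # PyYAML

    def load_menu(path: str) -> dict:
        """Load the menu hierarchy that used to be hardcoded in source."""
        with open(path, encoding="utf-8") as f:
            return yaml.safe_load(f)

    if __name__ == "__main__":
        menu = load_menu("menu.yaml")
        print(menu["label"], "->", [item["label"] for item in menu["items"]])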
Before AI, there was out-sourcing. With mass-produced cheap works, foreign studios eliminated most junior positions.
Now AI is just taking this trend to its logical extreme: out-sourcing to machines, the ultimate form of out-sourcing. The cost approaches 0 and the quantity approaches infinity.
Also management: "I need you to play with AI and try to find a use for it"
I am content to use the AI to perform "menial" tasks: I had a textfile in something parsable by field, with some minor quirks (like right-justified text), and was able to specify the field SEMANTICS in a way that made for a prompt producing an ICS calendar file which just imported fine as-is. Getting a year's forward planning from a textual note in some structure into calendar -> import -> from-file was sweet. Do I need to train an AI to use a token/API key to do this directly? No. But thinking about how I say efficiently what fields are, and what the boundaries are, helps me understand my data.
BTW, while I have looked at an ICS file and can see it is type:value, I have no idea of the types, or what specific GMT/Z format it wants for date/time, or the distinctions of meaning for confirmed/pending or the like. These are higher-level constructs which seem to have produced useful, distinct behaviours in the calendar, and the AI's description of what it had done and what I should expect lined up. I did not e.g. stipulate the mappings from semantic field to ICS type. I did say "this is a calendar date" and it did the rest.
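For anyone curious what the model was emitting: an .ics file is lines of KEY:VALUE properties (RFC 5545), with date-times usually given in UTC and marked by a trailing Z. A minimal Python sketch that renders one event; the summary, dates, and PRODID are made up for illustration, not the commenter's data:

    from datetime import datetime, timezone

    def make_event(summary: str, start: datetime, end: datetime) -> str:
        """Render one VEVENT inside a minimal VCALENDAR wrapper."""
        fmt = "%Y%m%dT%H%M%SZ"  # UTC timestamps; the trailing Z marks Zulu time
        now = datetime.now(timezone.utc).strftime(fmt)
        return "\r\n".join([
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "PRODID:-//example//planning//EN",
            "BEGIN:VEVENT",
            f"UID:{start.strftime(fmt)}-planning@example.invalid",
            f"DTSTAMP:{now}",
            f"DTSTART:{start.strftime(fmt)}",
            f"DTEND:{end.strftime(fmt)}",
            f"SUMMARY:{summary}",
            "STATUS:CONFIRMED",
            "END:VEVENT",
            "END:VCALENDAR",
        ]) + "\r\n"

    print(make_event(
        "Quarterly planning review",
        datetime(2025, 6, 2, 9, 0, tzinfo=timezone.utc),
        datetime(2025, 6, 2, 10, 0, tzinfo=timezone.utc),
    ))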
I used AI to write a Django web app to do some trivial booking stuff. I did not expect the code to run as-is, but it did. Again, could I live with this product? Yes, but the extensibility worries me. Adding features, I am very conscious that one wrong prompt can turn this into... dreck. It's fragile.
Some problems are too big to surrender judgment. Some problems are solved differently depending on what you want to optimize. Sometimes you want to learn something. Sometimes there's ethics.
I like "surrender judgment" as a phrase. It's a loss of locus of control. I also find myself asking whether there are ways the AI systems "monetize" the nature of problems being put forward for solutions. I am probably implicitly giving up some IPR by asking these questions; I could even be in breach of an NDA in some circumstances.
Some problems should not be put to an anonymous external service. I doubt the NSA wants people using Claude or Mistral or DeepSeek to solve NSA problems. Unless the goal is to feed misinformation or misdirection out into the world.
To pull this out of the games industry for just a moment, imagine this: you are a business and need a logo produced. Would you hire someone at the market price who uses AI to generate something... sort of on-brand they most definitely cannot provide indemnity cover for (considering how many of these dubiously owned works they produce), or would you pay above the market price to have an artist make a logo for you that is guaranteed to be their own work? The answer is clear - you'd cough up the premium. This is now happening on platforms like UpWork and Fiverr. The prices for real human work have not decreased; they have shot up significantly.
It's also happening slowly in games. The concept artists who are skilled command a higher salary than those who rely on AI. If you depend on image-generating AI to do your work, I don't think many game industry companies would hire you. Only the start-ups that lack experience in game production, perhaps. But that part of the industry has always existed - the one made of dreamy projects with no prospect of being produced. It's not worth paying much attention to, except if you're an investor. In which case, obviously it's a bad investment.
Besides, just as machine-translated game localization isn't accepted by any serious publisher (because it is awful and can cause real reputational damage), I doubt any evident AI art would be allowed into the final game. Every single piece of that will need to be produced by humans for the foreseeable future.
If AI truly can produce games or many of their components, these games will form the baseline quality of cheap game groups on the marketplaces, just like in the logo example above. The buyer will pay a premium for a quality, human product. Well, at least until AI can meaningfully surpass humans in creativity - the models we have now can only mimic and there isn't a clear way to make them surpass.
Only if companies value/recognize those real skills over that of the alternative, and even if they do, companies are pretty notorious for choosing whatever is cheapest/easiest (or perceived to be).
It's "hopeful" that the future of all culture will resemble food, where the majority have access to McDonalds type slop while the rich enjoy artisan culture?
Most people's purchasing power being reduced is a separate matter, more related to the eroding middle class and greedflation. Many things can be said about it, but they are less related to the trend I highlighted. Even if, supposing the middle class erosion continues, the scenario you suggest may very well play out.
Sounds like an "idea guy" rather than an art director or designer. I would do this exact same thing, but on royalty-free image websites, trying to get the right background or explanatory graphic for my finance powerpoints. Unsurprisingly, Microsoft now has AI "generating" such images for you, but it's much slower than what I could do flipping through those image sites.
This right here is the key. It's that stench of arrogance of those who think others have a "problem" that needs fixing, and that they are in the best position to "solve" it despite having zero context or experience in that domain. It's like calling the plumber and thinking that you're going to teach them something about their job.
The "AI" generated code is just code extracted from various sources used for training, which could not be used by a human programmer because most likely they would have copyrights incompatible with the product for which "AI" is used.
All my life I could have written any commercial software much faster if I had been free to just copy and paste any random code lines coming from open-source libraries and applications, from proprietary programs written for former employers, or from various programs written by myself as side projects with my own resources and in my own time, but whose copyrights I am not willing to donate to my current employer, since then I would no longer be able to use my own programs in the future.
I could search and find suitable source code for any current task as fast and with much greater reliability than by prompting an AI application. I am just not permitted to do that by the existing laws, unlike the AI companies.
Already many decades ago, it was claimed that the solution for enhancing programmer productivity is more "code reuse". However "code reuse" has never happened at the scale imagined in the distant past, but not because of technical reasons, but due to the copyright laws, whose purpose is exactly to prevent code reuse.
Now "AI" appears to be the magical solution that can provide "code reuse" at the scale dreamed a half of century ago, by escaping from the copyright constraints.
When writing a program for my personal use, I would never use an AI assistant, because it cannot accelerate my work in any way. For boilerplate code, I use various templates and very smart editor auto-completion, there is no need of any "AI" for that.
On the other hand, when writing a proprietary program, especially for some employer that has stupid copyright rules, e.g. not allowing the use of libraries with different copyrights, even when those copyrights are compatible with the requirements of the product, then I would not hesitate to prompt an AI assistant, in order to get code stripped of copyright, saving thus time over rewriting an equivalent code just for the purpose of enabling it to be copyrighted by the employer.
If you proposed something like GitHub Copilot to any company in 2020, the legal department would’ve nuked you from orbit. Now it’s ok because “everyone is doing it and we can’t be left behind”.
Edit: I just realized this was a driver for why whiteboard puzzles became so big - the ideal employee for MSFT/FB/Google etc was someone who could spit out library quality, copyright-unencumbered, “clean room” code without access to an internet connection. That is what companies had to optimize for.
I think that's part of the reason why devs like working from home and not be spied on.
My last boss told me essentially (paraphrasing), "I budget time for your tasks. If you finish late, I look like I underestimate time required, or you're not up to it. If you finish early, I look like I overestimate. If I give you a week to do something, I don't care if you finish in 5 minutes, don't give it to me until the week is up unless you want something else to do."
I’m in a Fortune 500 software company and we are also being pushed AI down our throats, even though so far it has only been useful for small development tasks. However our tolerance for incorrectness is much, much lower—and many skip levels are already realizing this.
So, where are the games with AI-generated content? Where are the reviews that praise or pan them?
(Remember, AI is a tool. Tools take time to learn, and sometimes, the tool isn't worth using.)
You'd hope so, but I'm not so sure. Media developments are not merely additive, at least with bean counters in charge. Certain formats absolutely eclipse others. It's increasingly hard to watch mainstream films with practical effects or animal actors. Even though most audiences would vastly prefer the real deal, they just put up with it.
It's easy to envision a similar future where the budget for art simply isn't there in most offerings, and we're left praising mediocre holdout auteurs for their simple adherence to traditional form (not naming names here).
I've got a refurb homelab server off PCSP with 512 GB of RAM for <$1k, and I run decently good LLM models (Deepseek-r1:70b, llama3.3:70b). Given your username, you might even try pitching a GPU server to them as dual-purpose; LLM + hashcat. :)
As a small data point, I don't think AI can make movies worse than they currently are. And they are as bad as they are for commercial but non-AI reasons. But if the means to make movies using AI, or scene-making tools built with a combo of AI and maybe game engine platforms, puts the ability to make movies into the hands of more artistic people, the result may be more technologically uninteresting but nonetheless more artistically interesting because of narrative/character/storytelling vectors. Better quality for niche audiences. It's a low bar, but it's one possible silver lining.
That said I agree with your second paragraph. I think we will see an explosion of high quality niche products that would never have been remotely viable before this.
People left alone in a race to the bottom... it does not end well, it seems.
Creativity emerges through a messy exploration and human experience -- but it seems no one has time for that these days. Managers have found a shiny new tool to do more with less. Also, AI companies are deliberately targeting executives with promises of cost-cutting and efficiency. Someone has to pay for all the R&D.
Notably, a good number of the examples were just straight-up bad management, irrespective of the tools being used. I also think some of these reactions are people realizing that they work for managers or in businesses that ultimately don't really care about the quality of their work, just that it delivers monetary value at the end.
https://news.ycombinator.com/item?id=43613079
lol, I will point out that this has been an enormous problem in the game industry since long, long before generative AI existed.
Am I the only one that thinks this is kind of a given regardless of the merits of the objection?
This is a key insight. The other insight is that devs spend most of their time reading and debugging code, not writing it. AI speeds up the writing of code but slows down debugging... AI was trained with buggy code because most code out there is buggy.
Also, when the codebase is complex and the AI cannot see all the dependencies, it performs a LOT worse because it just hallucinates the API calls... It has no idea what version of the API it is using.
TBH, I don't think there exists enough non-buggy code out there to train an AI to write good code which doesn't need to be debugged so much.
When AI is trained on normal language, averaging out all the patterns produces good results. This is because most humans are good at writing with that level of precision. Code is much more precise and the average human is not good at it. So AI was trained on low-quality data there.
The good news for skilled developers is that there probably isn't enough high quality code in the public domain to solve that problem... And there is no incentive for skilled developers to open source their code.
Large corporations will use AI to deliver low-quality software at high speed and high scale.
"Artisan" developers will continue to exist, but in much smaller numbers and they will mostly make a living by producing refined, high-quality custom software at a premium or on creative marketplaces. Think Etsy for software.
That's the world we are heading for, unless/until companies decide LLMs are ultimately not cost-beneficial, or overzealous use of them leads to a real hallucination-induced catastrophe.
This is why, when you hear people talk about how great it is at producing X, your takeaway should be "this person is not an expert at X, and their opinions can be disregarded."
They are telling on themselves that they are not experts at the thing they think the AI is doing a great job at
I'm playing devil's advocate somewhat here, but it often seems like there's a bunch of people on both sides using hella motivated reasoning because they have very strong feelings that developed early on in their exposure to AI.
AI is both terrible and wonderful. It's useless at some things and impressive at others. It will ruin whole sectors of the economy and upturn lives. It will get better, and it is getting better, so any limitations you currently observe are probably temporary. The net benefit for humanity may turn out to be positive or negative; it's too early to tell.
That's kind of my problem: I'm saying that it mostly only appears impressive to people who don't know better.
When people do know better, it consistently comes up short.
Most of the pro-AI people I see are bullish about it on things they have no idea about, like non-technical CEOs insisting that it can create good code.
I disagree with that part and I don't think this opinion can be sustained by anyone using it with any regularity in good faith
People can argue whether it's 70/30 or 30/70 or what domains it's more useful in than others but you are overstating the negative.
It's just a tool, but it is unfortunately a tool that is currently dominated by large corporations, in service of capitalism. So it's definitely going to be a net negative.
Contrast that to something like 3D printing, which has most visibly benefited small companies and individual users.
https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect
It's very clear the "value" of the LLM generation is to churn out low-cost, low-quality garbage. We already outsourced stuff to Fiverr, but now we can cut people out altogether. Producing "content" nobody wants.
1. productivity and quality are hard to measure
2. the codebase they are ruining is the same one I am working on.
We're supposed to have a process for dealing with this already, because developers can ruin a codebase without ai.
"more code faster" is not a good thing, it has never been a good thing
I'm not worried about pro AI workers ruining their codebases at their jobs
I'm worried about pro AI coworkers ruining my job by shitting up the codebases I have to work in
Pump the brakes there
You may have bought into some PMs idea of what we do, but I'm not buying it
As professional, employed software developers, the entire point of what we do is to provide value to our employers.
That isn't always by delivering features to users, it's certainly not always by delivering features faster
Why the hand-wringing? Well, for one thing, as a developer I still have to work on that code, fix the bugs in it, maintain it etc. You could say that this is a positive since AI slop would provide for endless job security for people who know how to clean up after it - and it's true, it does, but it's a very tedious and boring job.
But I'm not just a developer, either - I'm also a user, and thinking about how low the average software quality already is today, the prospect of it getting even worse across the board is very unpleasant.
And as for things taking care of themselves, I don't think they will. So long as companies can still ship something, it's "good enough", and cost-cutting will justify everything else. That's just how our economy works these days.
If a company fails to compete in the market and dies, there is no "autopsy" that goes in and realizes that it failed because of a chain-reaction of factors stemming from bad AI-slop code. And execs are so far removed from the code level, they don't know either, and their next company will do the same thing.
What you're likely to end up with is project managers and developers who do know the AI code sucks, and they'll be heeded by execs just as much as they are now, which is to say not at all.
And when the bad AI-code-using devs apply to the next business whose execs are pro-AI because they're clueless, guess who they'll hire?
Real wealth creation will come from other domains. These new tools (big data, ML, LLMs, etc) unlock the ability to tackle entirely new problems.
But as a fad, "AI" is pretty good for separating investors from their money.
It's also great for further beating down wages.
I believe AI is a variation of this, except a library at least has a license.
On the other hand, AI can be useful and can accelerate some work a bit.
This is satire right?
Obviously some workers have a strong incentive to oppose adoption, because it may jeopardize their careers. Even if the capabilities are overstated, it can become a self-fulfilling prophecy depending on how higher-ups' choices go. Union shops will try to stall it, but it's here to stay. You're in a globally competitive market.
There is a bunch of programmers who like ai, but as the article shows, programmers are not the only people subjected to ai in the workplace. If you're an artist, you've taken a job that has crap pay and stability for the amount of training you put in, and the only reason you do it is because you like the actual content of the job (physically making art). There is obviously no upside to ai for those people, and this focus on the managers' or developers' perspective is myopic.
I think for the most part creatives will still line up for these gigs, because they care about contributing to the end products, not the amount of time they spend using Blender.
Re-read what I wrote. You repeated what I said.
> So: from the perspective of a large part of the workforce it is completely true and rational to say that ai at their job has mostly downsides.
For them, maybe.
If you start by replacing menial labor, there will be more unemployment but you’re not going to build the political will to do anything because those jobs were seen as “less than” and the political class will talk about how good and efficient it is that these jobs are gone.
You need to start by automating away “good jobs” that directly affect middle/upper class people. Jobs where people have extensive training and/or a “calling” to the field. Once lawyers, software engineers, doctors, executives, etc get smacked with widespread unemployment, the political class will take UBI much more seriously.
They'll use their clout—money, lobbying, and media influence—to lock in their advantage and keep decision-making within their circle.
In the end, this setup would just widen the gap, cementing power imbalances as AI continues to reshape everything. UBI will become the bare minimum to keep the masses sedated.
If your job consists of reading from a computer -> thinking -> entering things back into a computer, you're at the top of the list, because you don't need to set up a bunch of new sensors and actuators. In other words… the easier it is to do your job remotely, the more likely it is you'll get automated away.
There's also the fact that "they" aren't all one and the same persons with the exact same worldview and interests.
You might say "but why not use just 1% of that GDP on making sure the rest of humanity lives in at least minimal comfort"? But clearly -- we already choose not to do that today. 1% of the GDP of the developed world would be more than enough to solve many horrifying problems in the developing world -- what we actually give is a far smaller fraction, and ultimately not enough.
Of course there is always the issue of “demand”—of keeping the factories humming, but when you are worth billions, your immediate subordinates are worth hundreds of millions, and all of their subordinates are worth a few million, maybe you come to a point where “lebensraum” becomes more valuable to you than another zero at the end of your balance?
When AI replaces the nerds (in progress), they become excess biomass. Not talking about a retarded hollywood-style apocalypse. Economic uncertainty is more than enough to suppress breeding in many populations. “not with a bang, but a whimper”
If you know any of “them”, you will know that “they” went to the same elite prep schools, live in the same cities, intermarry, etc. The “equality” nonsense is just a lie to numb the proles. In 2025 we have a full-blown hereditary nobility.
edit: answer to ianfeust6:
The West is not The World. There are over a billion Chinese, Indians, Africans…
Words mean things. Who said tree hugger? If you are an apex predator living in an increasingly cloudy tank, there is an obvious solution to the cloudiness.
Don't forget fertility rate is basically stagnant in the West and falling globally, so this seems like a waste of time considering most people just won't breed at all.
With white-collar jobs the threat of AI feels more abstract and localized, and you still get talk about "creating new jobs", but when robots start coming off the assembly line people will demand UBI so fast it will make your head spin. Either that or they'll try to set fire to them or block them with unions, etc. Hard to say when because another effort like the CHIPS act could expedite things.
Goldman Sachs doesn't think so.
https://www.fortunebusinessinsights.com/humanoid-robots-mark...
https://finance.yahoo.com/news/humanoid-robot-market-researc...
https://www.mordorintelligence.com/industry-reports/robotics...
They don't even need to be humanoid is the thing.
No, the underlying format of "$LABOR_ISSUE can be solved by $CHANGE_JOB" comes from a place of politics, where a politician is trying to suggest they have a plan to somehow tackle a painful problem among their constituents, and that therefore they should be (re-)elected.
Then the politicians piled onto "coal-miners can learn to code" etc. because it was uniquely attractive, since:
1. No big capital expenditures, so they don't need to promise/explain how a new factory will get built.
2. The potential for remote work means constituents wouldn't need to sell their homes or move.
3. Participants wouldn't require multiple years of expensive formal schooling.
4. It had some "more money than you make now" appeal.
https://en.wikipedia.org/wiki/Learn_to_Code#Codecademy_and_C...
Forget these new taxes on Americans who buy Canadian hardwood, we can just supply logs from your eyes.
I think it's also unfortunate how the advocates for AI replacing artists in gamedev clearly think of art as a chore or a barrier to launch rather than being the whole point of what they're making. If games are art, then it stands to reason the.. art.. that goes into it is just as important as anything else. A game isn't defined just by the logic of the main loop.
There are those who adapt, those who will keep moaning about it, and finally those who believe it can do everything.
The first will succeed, the second will be replaced, and the third is going to get hurt.
I believe this article and the people it mentions are mostly in the second category. Yet no one in their right mind can deny that AI makes writing code faster (not necessarily better, but faster), and games, in the end, are mostly code.
Of course AI is going to get pushed hard by your CEO; he knows that if he doesn't, a competitor who uses it will be able to produce more games, faster and cheaper.
It's actually quite easy, and not uncommon, to deny all of those things. Game code is complex and massively interwoven; relying on generated code that you didn't write and don't fully understand will certainly break down as game systems increase in complexity, and you will struggle to maintain it or make effective modifications. So even ignoring the fact that the quality is lower, there's an argument to be made that it will be "slower" to write in the long term.
I think it's also flat wrong to say games are "mostly code" -- games are a visual medium and people remember the visual/audio experience they had playing a game. Textures, animations, models, effects, lighting, etc. all define what a game is just as much if not more than the actual gameplay systems. Art is the main differentiating factor between games when you consider that most gameplay systems are derivative of one another.
I can assure you it's not. And people are starting to realise that there is a lot of shit. And know that LLMs generate it.
And yet this is no guarantee they will succeed. In fact, the largest franchises and games tend to be the ones that take their time and build for quality. There are a thousand GTA knock-offs on Steam, but it's R* that rakes in the money.
> Francis says their understanding of the AI-pusher’s outlook is that they see the entire game-making process as a problem, one that AI tech companies alone think they can solve. This is a sentiment they do not agree with.
This seems like a you problem...
It's not a culture war until there's two sides, until a segment of the population throws a hissyfit because new ideas make them uncomfortable.
Take out the word AI and replace it with any other tool that's over-hyped or over-used, and the above statement will apply to any organization.
They weren't fired; they weren't laid off; they weren't reassigned or demoted; they got attention and assistance from the CEO and guidance on what they needed to do to change and adapt while keeping their job and paycheck at the same time, with otherwise no disruption to their life at all for now.
Prosperity and wealth do not come for free. You are not owed anything. The world is not going to give you special treatment or handle you with care because you view yourself as an artisan. Those are rewards for people who keep up, not for those who resist change. It's always been that way. Just because you've so far been on the receiving end of prosperity doesn't mean you're owed that kind of easy life forever. Nobody else gets that kind of guarantee -- why should you?
The bottom line is the people in this article will be learning new skills one way or another. The only question is whether those are skills that adapt their existing career for an evolving world or whether those are skills that enable them to transition completely out of development and into a different sector entirely.
Real knowledge here is often absent from the strongest AI proselytizers; others are more realistic about it. It still remains an awesome tool, but a limited one.
AIs today are not creative at all. They find statistical matches. They perform a different work than artists do.
But please, replace all your artwork with AI-generated pieces. I believe the forced "adapt" phase would then arrive rather quickly with that approach.
The CEOs in question bought what they believed to be a power tool, but got what is more like a smarter copy machine. To be clear, copy machines are not useless, but they also aren't going to drive the 200% increases in productivity that people think they will.
But because management demands the 200% increase in productivity they were promised by the AI tools, all the artists and programmers on the team hear "stop doing anything interesting or novel, just copy what already exists". To be blunt, that's not the shit they signed up for, and it's going to result in a far worse product. Nobody wants slop.
lol. I work with LLM outputs all day -- like it's my job to make the LLM do things -- and I probably speak to some LLM to answer a question for me between 10 and 100 times a day. They're kinda helpful for some programming tasks, but pretty bad at others. Any company that tried to mandate me to use an LLM would get kicked to the curb. That's not because I'm "not keeping up", it's because they're simply not good enough to put more work through.
If management is convinced of the benefits of LLMs and the workers are all simply refusing to use them, the main problem seems to be a dysfunctional working environment. It's ultimately management's responsibility to work that out. But if management isn't completely incompetent, the people tasked with using the tools could do a lot to help the situation by testing them and providing constructive feedback, rather than making a stand by refusing to try and offering grand narratives about damaging the artistic integrity of something that has been commoditized since its inception, like video game art. I'm not saying that video game art can't be art, but it has existed in a commercial crunch culture since the 1970s.
I've also come to realize that in software development, coding is secondary to logical thinking. Logical thinking is the primary medium of every program; the language is just a means to express it. I may not have memorized as many languages as the AI has, but I can think more logically than it can. It helps me execute my tasks better.
Also, I've been able to do all kinds of crazy and fun experiments thanks to genAI. Knowing myself, I realistically will never learn LISP and will always retain just an academic interest in it. But with AI I can explore these languages and other areas of programming beyond my expertise and experience much more effectively than ever before. Something about the interactive chat interface keeps my attention and allows me to go way deeper than textbooks or other static resources.
I do think in many ways it's a skill issue. People conceptualize genAI as a negation of skills, an offloading of skill to the AI, but in actuality grokking these things and learning how to work with them is its own skill. Of course managers just forcing it on people will elicit a bad reaction.
git gud