_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
Visit Hacker News on the Web
COMMENT PAGE FOR:
A16z partner says that the theory that we'll vibe code everything is wrong
lpeancovschi wrote 4 hours 34 min ago:
One who thinks that complex software can be "vibe coded", hasn't worked
on complex codebases.
aleph_minus_one wrote 3 hours 51 min ago:
> One who thinks that complex software can be "vibe coded", hasn't
worked on complex codebases.
I do think that I have worked on somewhat complex code bases. The
reason they are complicated is often "political": at some point it
was decided that this was the way to go, and from then on the
specific abstraction was used. It turned out these wishes were not a
good idea, but the code was never re-developed with a "more proper"
architecture (partly because removing some insanely convoluted
feature would anger some users).
I see no reason why some (hypothetical) AI couldn't come up with a
much better architecture (also good programmers are capable of this).
The problem is rather "getting this architecture through
politically"; for some reason "AI suggested/created it" is much more
socially accepted by managers than "programmer X considers this
change to be necessary" (I cannot understand why).
funkyfiddler369 wrote 4 hours 24 min ago:
it can, but it will take one person just as long as it would take a
small team without "AI", and that one person will carry all the
frustration, doubt, all the to-do lists and imaginary pin boards and
all that other stuff programmers carry around in their heads, at work
and back home. have fun with all that.
side note: indie games are not complex software.
and most "overvalued" and "impossible" and "walking it back now"
comments are true in as many cases as they are not, and I really
do not understand these commenters. smart people should not fall into
the same category as people who think that "nobody cares" because
they never met devoted lawyers, investigative journalists and law
enforcers passionate about justice AND the law. it's all so weird, man
...
rhubarbtree wrote 5 hours 33 min ago:
AI assisted coding is going to make it easier to create software.
Developers will be more productive. Non developers will be able to
create some stuff.
What this means is that very simple apps will become easy to create
quickly. So a todo manager is probably not going to be a very
successful business. You'll be competing with many, many people and it
will be commoditised.
But ultimately what happens here is that the "complexity threshold" -
the level of sophistication a product needs in order to make money -
will be raised. Existing products will become more sophisticated or,
if there is no more "sophistication ladder" to climb, they will be
commoditised.
There's just no way people are going to vibe all their software,
that's a very self-absorbed nerd take. But on the supply side we'll
see commoditisation, price drops, and increasingly good value for the
user as features are shipped faster.
I also think that software quality is really going to tank, because
using validation to test the output of Claude is not a good way to
ensure quality or correctness. It'll get you some of the way, but you
need powerful reasoning. The most obvious evidence for this is security
flaws in AI code. We'll see a new era of enshittification caused by AI
code. Like outsourced manufacturing, though, people will buy worse stuff
at a cheaper price. That makes me sad, because I thought we were on a
path to better software, not buggier software.
kwar13 wrote 6 hours 46 min ago:
aol.com...? wow what year is this
koliber wrote 7 hours 25 min ago:
Using vibe coding to build a small specialized tool for a small company
that can be used instead of single feature of a commercial SaaS is
doable and brings value.
Using vibe coding to build something to replace an enterprise SaaS
offering for a medium to large company is not something to be taken
lightly. The tool and the code are not everything. The operating
environment, security guarantees, SLAs, support, and a bag of features
you don't need today but might tomorrow are what the SaaS offerings
bring to the table.
Imagine that I run a really good software house. I can literally build
anything you want, feature wise, better than most. I do it quickly. You
come to me and say you want to replace Slack for your team of 200,
because Slack got too expensive. I say I can do it. Because I am
feeling generous and you're my good friend, I will do it for free.
However, I will just give you the code, a CI/CD script, and a README.md
file. I will disappear and will not maintain or support your software,
nor will I give you any guarantees on how well it will work, other than
a "trust me."
I wouldn't take the offer.
zozbot234 wrote 5 hours 46 min ago:
The Matrix folks have covered the "replace Slack internal chat" case
already. They will give you the code so that you can bring the
service up internally, or you can use any 3rd party hosted solution
that provides the usual support and "enterprise" guarantees, for a
price. Why can't this model generalize to sector-specific SaaS
offerings that can now be prototyped cheaply via AI vibecoding?
cowpig wrote 5 hours 1 min ago:
Zulip is an actual slack upgrade
didntknowyou wrote 8 hours 11 min ago:
so are they saying this based on their analysis, or because they are
trying to stir up support for a non-vibe-coding startup they have
invested in?
saidnooneever wrote 8 hours 45 min ago:
it's the same as why roads are still largely built by hand, and houses
etc.
it is not needed to automate everything. some joys should not be
automated away; people won't let them be either way.
the world could be much more optimal in many places, but it's boring,
so the optimisations go elsewhere.
ChicagoDave wrote 9 hours 46 min ago:
The buy vs build discussion has dramatically changed with GenAI. Some
enterprise systems need to remain vendor based, but there's a ton of
space for mid-size and smaller companies to build and maintain their
own systems, and tons of software that today lives in Excel could
become fully realized departmental systems.
RamblingCTO wrote 9 hours 27 min ago:
Sorry, this take just shows that you probably are not running a
business. Having someone dedicate their whole business to a solution
to one of your problems will most likely get a better result than you
doing a hackjob you can't even maintain. Let alone the maintenance,
logistics, complexity, time etc. The economics just aren't there to
vibe code even more than 30% of the software you use.
People running businesses want to focus on their core business and
are happy to pay for pain points to go away, for money to come in or
less money to come out. It's that simple.
throw77488 wrote 9 hours 0 min ago:
You sound like a salesman. Small businesses will always choose the
one-hour free "hack" fix over the $50k solution with "complexity,
maintenance...". A shitty Python script with DuckDB running locally on
a laptop can get you a long, long way.
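The kind of quick local "hack" the comment describes can be sketched in a few lines. SQLite is used here instead of DuckDB so the sketch needs nothing beyond the Python standard library, and the sales data is made up for illustration:

```python
# A local "hack" report: load CSV rows into an in-process database
# and answer a business question with one query. No server, no SaaS.
import csv
import io
import sqlite3

# Hypothetical sales data standing in for a real CSV export.
CSV_DATA = """region,amount
north,120
south,80
north,60
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")

# DictReader yields one dict per row; named parameters bind them directly.
rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
conn.executemany("INSERT INTO sales VALUES (:region, :amount)", rows)

# Total per region, highest first.
totals = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY 2 DESC"
).fetchall()
print(totals)  # [('north', 180), ('south', 80)]
```

Swapping in DuckDB would mostly be a matter of changing the `connect` call; the point is that the whole "tool" is one disposable script.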
ChicagoDave wrote 9 hours 4 min ago:
I've been a consultant to Fortune 100 companies throughout my
career and the amount of pain they willingly endure supporting
Excel, Access, and .NET/Java applications is astounding. The desire
to eliminate these things is high, but there's no political will
over cost and appeasing departmental management.
I think GenAI opens Pandora's box and all of these decisions
change.
onion2k wrote 9 hours 28 min ago:
That's a short term view. Any system you build inhouse has to be
maintained until you replace it, and often the longer it remains in
place the harder it is to do that. You might save a small amount of
cash (which might be important at the time tbf) but you're creating a
major headache for later. Legacy code is debt, and that includes all
your code. It's also a huge problem if the maintainer leaves because
typically those small systems are owned by an individual dev who set
it up in the first place.
Everyone who founds a company needs to remember that they're building
a system of systems that all interact and influence each other, and
you have to balance short term cash flow against long term strategy.
PacificSpecific wrote 7 hours 54 min ago:
I don't understand why you appear to be downvoted for this (your
comment is faded at the time I'm reading this). It sounds like a
perfectly reasonable take.
I've certainly inherited and also caused these problems in my
younger years.
ookblah wrote 10 hours 7 min ago:
a lot of low-level ops stuff is going to be eaten up imo. half the
bullshit you have to deal with is integrating data across every
platform you are using, or dealing with other supposed products that
help you integrate the integrators lol. i guess if you're a huge
company with 1000s of people this is an inherent problem anyway, one
you can spend millions of dollars on.
it's not just "replace snowflake", there are a lot of times i wish i
could build a very focused thing to accelerate some of our internal
workflows, and the nocode solutions were either so simplistic that you
ended up spending just as much time trying to wrangle some generic
solution to your own use case, or it was not worth throwing significant
engineering resources behind internal ops stuff. now that barrier is
dropping fast and it's feasible for us.
whoever can create the framework/tooling for people to build their own
systems will win this, but i don't think it's something that can be
"productized" like a saas.
jazzpush2 wrote 12 hours 1 min ago:
A16z partners don't know shit. Brain dead nepo-babies - how's Cluely?
ozgrakkurt wrote 12 hours 6 min ago:
...said the guy who doesn't code anyway
zmmmmm wrote 13 hours 5 min ago:
It seems to be premised on the idea we would vibe code a replica of
what we get from SaaS. But the real point is, we would not do that. We
would vibe code something that exactly fits our business.
We have products we're paying $100k a year for and using 3% of the
functionality. And they suck. This is the target.
atlgator wrote 14 hours 2 min ago:
"You have this innovation bazooka. Why would you point it at rebuilding
payroll?" - a partner at the firm whose thesis was literally
"software is eating the world."
Apparently the meal is over and now we're just rearranging the plates.
umairnadeem123 wrote 14 hours 32 min ago:
The missing piece in this debate is that most "vibe coded" replacements
break at scale. I tried replacing a multi-step workflow with Make.com +
Airtable (not even vibe coding, just no-code automation) and it fell
apart past 2 jobs per day - rate limits, webhook failures, state
management nightmares. The real pattern I see working is not "replace
SaaS with vibe code" but rather "stitch together 5-6 specialized tools
with a thin orchestration layer you write yourself." The orchestration
is where AI actually helps - it's glue code, not the product.
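The "thin orchestration layer" idea in the comment above can be sketched as a small pipeline runner that retries transient failures (the rate limits and webhook hiccups mentioned) and keeps explicit state. The step names and the simulated failure here are hypothetical, not any real tool's API:

```python
# Minimal orchestration-layer sketch: run a pipeline of steps in order,
# retry transient failures, and record which steps completed.
import time


def run_pipeline(steps, state, retries=3, delay=0.0):
    """Run each (name, fn) step in order; retry RuntimeError up to `retries` times."""
    for name, step in steps:
        for attempt in range(1, retries + 1):
            try:
                state = step(state)
                state.setdefault("done", []).append(name)
                break
            except RuntimeError:
                if attempt == retries:
                    raise  # give up: the failure was not transient
                time.sleep(delay)  # back off before retrying
    return state


calls = {"n": 0}


def fetch(state):
    # Fails on the first call to simulate a rate limit / webhook hiccup.
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("rate limited")
    return {**state, "rows": [1, 2, 3]}


def summarize(state):
    return {**state, "total": sum(state["rows"])}


result = run_pipeline([("fetch", fetch), ("summarize", summarize)], {}, delay=0)
print(result["total"], result["done"])  # 6 ['fetch', 'summarize']
```

In practice each step would wrap a call to one of the specialized tools; the glue stays small enough that an AI assistant can generate and regenerate it cheaply.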
seg_lol wrote 14 hours 35 min ago:
I cannot believe there has been no mention of things like n8n,
activepieces and windmill in this thread. SaaS will utterly collapse in
18 months. [1] [2] [3]
[1]: https://github.com/n8n-io/n8n
[2]: https://github.com/activepieces/activepieces
[3]: https://github.com/windmill-labs/windmill
martinald wrote 15 hours 2 min ago:
I sort of agree with this, but what a lot of people are missing is it's
unbelievably easy to clone a lot of SaaS products.
So I think big SaaS products are under attack from three angles now:
1) People replacing certain systems with 'vibe coded' ones, for either
cost/feature/unhappiness with vendor reasons. I actually think this is
a bigger threat than people think - there are so many BAD SaaS products
out there which cost businesses a fortune in poor
features/bugs/performance/uptime, and if the models/agents keep
improving the way they have in the last couple of years it's going to
be very interesting if some sort of '1000x' engineer in an agent can do
crazy impressive stuff.
2) Agents 'replacing' the software. As people have pointed out, just
have the agent use APIs to do whatever workflow you want - ping a
database and output a report.
3) "Cheap" clones of existing products. A tiny team can now clone a
"big" SaaS product very quickly. These guys can provide
support/infra/migration assistance and make money at a much lower price
point. Even if there is lock in, it makes it harder for SaaS companies
to keep price pressure up.
harrall wrote 14 hours 41 min ago:
But have you ever tried to clone a product or tool for yourself
before? At first it's great because you think that you saved money,
but then you start having to maintain it... fixing problems, filling
in gaps... you now realize that you made a mistake. Just because AI
can do it now doesn't mean you aren't just now having to use AI
to do the same thing...
Also, agents are not deterministic. If you use one to analyze data, it
will get it right most of the time but, once in a blue moon, it will
make shit up, except you can't tell which time it was. You could
make it deterministic by having AI write a tool instead... except you
now have the first problem of maintaining a tool.
That isn't to say that there isn't small low-hanging fruit that
AI will replace, but it's a bit different when you need a real
product with support.
At the end of the day, you hire a plumber or use a SaaS not because
you can't do it yourself, but because you don't want to do it and
would rather have someone else who is committed to it handle it.
martinald wrote 14 hours 34 min ago:
I'm not saying _the end user_ clones it. I mean someone else does
(more efficiently with agents) and runs it as a _new_ SaaS company.
They would provide support just like the existing one would, but
arguably at a cheaper price point.
And regarding agents being non-deterministic: if they write a bunch
of SQL queries to a file for you, those are deterministic. They can
just write "disposable" tools and scripts - not always doing it
through their context.
svnt wrote 14 hours 7 min ago:
The challenge to this is that so much of the difficulty in
getting people to switch products is trust, and a couple of
people running saas with claude code has no differentiation and
no durability.
I think it will be a little different: black box the thing,
testable inputs and outputs, and then go to town for a week or
two until it is reasonable. Then open source it. Too big/complex
for an agent? Break down the black box into reasonable ideas that
could comprise it and try again. You can replace many legacy
products and just open source the thing. If the customer can
leave behind some predatory-priced garbage for a solution where
they get the code I think they would be a lot more likely to pay
for help managing/setting it up.
aobdev wrote 14 hours 12 min ago:
But isn't this what the article is saying? Even with AI
you're still not going to build your own payroll/ERP/CRM.
sebastos wrote 14 hours 57 min ago:
Insightful points!
It would be interesting if, with all the anxiety about vibe coding
becoming the new normal, its only lasting effect is the emergence of
smaller B2B companies that quickly razzle-dazzle together a bespoke
replacement for Concur, SAP, Workday, the crappy company SharePoint -
whatever. Reminds me of what people say Palantir is doing, but now
supercharged by AI-driven workflows to stand up the "forward
deployed" "solution" even faster.
martinald wrote 14 hours 36 min ago:
Thanks, yes, exactly what I think.
Or an industry-specific Workday, with all of Workday's features but
aimed at a niche vertical.
I wrote about this (including an approach on how to clone apps with
HAR files and agents) if you are interested.
[1]: https://martinalderson.com/posts/attack-of-the-clones/
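A first step in a HAR-based cloning approach like the one linked above would be mining a browser's HAR export for the API surface an app actually uses. This is a sketch under that assumption; the capture content, hostnames, and the `host_hint` filter are all hypothetical:

```python
# Extract the API surface (method + URL) from a HAR capture.
# HAR is JSON with requests under log.entries[].request.
import json

# Hypothetical minimal HAR export standing in for a real browser capture.
HAR = json.dumps({
    "log": {"entries": [
        {"request": {"method": "GET",  "url": "https://api.example.com/v1/projects"}},
        {"request": {"method": "POST", "url": "https://api.example.com/v1/projects"}},
        {"request": {"method": "GET",  "url": "https://cdn.example.com/app.js"}},
    ]}
})


def api_endpoints(har_text, host_hint="api."):
    """Return unique (method, url) pairs whose URL looks like an API call."""
    entries = json.loads(har_text)["log"]["entries"]
    seen = []
    for entry in entries:
        req = entry["request"]
        pair = (req["method"], req["url"])
        if host_hint in req["url"] and pair not in seen:
            seen.append(pair)  # keep first-seen order, drop duplicates
    return seen


print(api_endpoints(HAR))
```

The static-asset request is filtered out, leaving the two API calls an agent would need to reimplement or stub.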
aobdev wrote 15 hours 52 min ago:
Thought exercise for those in disagreement: why would every company use
AI to build their own payroll/ERP/CRM, when just a handful of companies
could use AI to build those offerings better?
This is largely how things work now; AI may lower the cost and increase
margins, but the economics of build vs buy seem the same.
dyauspitr wrote 11 hours 33 min ago:
Every company that I've worked at has had to do significant
additional development work on their instance of Salesforce to make
it work for them. Like 6-12 months of work with 1-3 people. I don't
know if this is common, but in that case maybe going custom might be
the way to go. You get something lean, without all the cruft,
specifically built for your use case and nothing more.
adrianwaj wrote 15 hours 16 min ago:
To avoid CRAZY SaaS charges. I left a comment further down about how
the challenge is first getting a reliable stack running underneath
whatever ends up being fast-coded. The trend will be more
decentralization - I think that'll be AI 2.0. Increasing
centralization is AI 1.0.
pmmucsd wrote 15 hours 23 min ago:
Slack is a good example. When the cost of Slack is an unreasonable
amount of your operating costs then it makes sense to clone and
maintain. The product is simple, you can basically recreate the main
functionality in a sitting. Why would you pay hundreds of thousands
of dollars for it?
andersmurphy wrote 10 hours 19 min ago:
I mean, once Campfire is full featured, free, and easy to self-host,
it's a completely open source Slack replacement.
I imagine it's also infinitely better than anything an in-house
team could vibe code.
You don't need AI for a cheap Slack alternative.
That's why I don't buy any of this.
Companies are not bothering with the free/open alternatives.
Unless the real power of LLMs is making it easy for Greg in HR to
self-host these existing alternatives. But that, a trillion-dollar
market does not make.
b00ty4breakfast wrote 11 hours 7 min ago:
I have to imagine that companies pay so much money for Slack
because it's actually not that simple.
At the very least, the return is not worth the time and effort.
dehrmann wrote 11 hours 41 min ago:
Mattermost is FOSS. Why aren't companies running their own servers
to avoid Slack? Prior to OneDrive and web integration, LibreOffice
was 95% as good as MS Office, better than VibeOffice will likely
be, and it still failed to gain much traction.
rpdillon wrote 5 hours 24 min ago:
This is the key point. We've already run the experiment where the
code is free and all you need to do is host it yourself and
people still didn't opt to do that work. I don't see how AI
changes the situation.
jayd16 wrote 14 hours 42 min ago:
Slack is a hilarious example.
I can't wait for orgs to try to vibe-roll their own dozen clients
and security models, and then try to handle external
integrations of some kind.
nkrisc wrote 15 hours 7 min ago:
If Slack is so simple, why didn't companies create their own
internal versions 10 years ago?
klodolph wrote 14 hours 43 min ago:
Every company I worked at in the past 10 years has created an
internal version of Slack. Four companies.
nkrisc wrote 3 hours 47 min ago:
I guess to provide a counterpoint to my own comment, even I
worked for a company that created their own internal social
network similar to Facebook (this was 15 years ago).
Of course it sucked and no one used it except executives and
VPs. Everyone else did just enough to meet the minimum
quarterly engagement metrics right before performance reviews.
aobdev wrote 14 hours 29 min ago:
I don't doubt it, but that doesn't negate the fact that
Slack as a company exists and makes money by selling software.
My question is this: AI makes it cheaper to build software, but
ADP, SAP, and Salesforce also have access to AI and could make
cheaper versions of their products. How does AI change the
build vs buy trade-off in a way that eliminates economies of
scale? My opinion, and that of the article, is that it doesn't.
klodolph wrote 14 hours 11 min ago:
> How does AI change the build vs buy trade off in a way that
eliminates economies of scale?
I think a more likely scenario here is that something good
and free escapes containment at some point and Slack's core
product just kind of deflates. Not something better than
Slack, but something good enough that people don't care
about Slack any more.
I don't see it as a question of whether you build it or buy
it, but a question of the time horizon for selling messaging
software as a business strategy. Most business strategies
have a finite time horizon. How long can you continue to sell
messaging software before there are too many competing
solutions available and you stop making money from it?
rpdillon wrote 5 hours 22 min ago:
We've already run this experiment with Zulip and
Mattermost. Slack still won.
jayd16 wrote 14 hours 42 min ago:
Why don't they sell them?
klodolph wrote 14 hours 35 min ago:
A list of outcomes:
1. They did, and still sell it. You can buy it.
2. They did, and then exited the market. Employees gradually
migrated off the internal platform.
3. They weren't in the business of selling software, and
didn't sell their internal messaging platform (which is
idiosyncratic and closely integrated with other internal
systems).
aobdev wrote 15 hours 10 min ago:
That's a fine example, but my question then is why does Slack
exist? Surely Fortune 500 companies are smart enough to realize
that building a Slack clone is cheaper, yet they don't do that.
So now consider AI: perhaps the cost of building has decreased from
100k to 10k. What stops a Slack competitor from also building the
product for 10k and reselling it at 10% of the cost of Slack? My
point is that I don't see how AI has changed the value prop.
krisoft wrote 9 hours 13 min ago:
> my question then is why does Slack exist?
I do not actually believe that you can trivially vibe code a
viable Slack replacement. But even if one day we could, it
wouldn't mean that Slack as a company would just disappear
overnight.
They would hang around serving companies who haven't got the
memo yet, or who are locked in a contract, or where the internal
political situation is against such a move. The inertia of a
bunch of humans behaving like a bunch of humans would provide a
sort of "coyote time" effect where the fundamentals could
fall out from under Slack yet the company would keep
"floating" for a while.
It is funny how much your question sounds like the old joke
where an economist can't believe their eyes that a $20 bill is
lying on the pavement, because surely if it were, someone
would have already picked it up. In a steady state the logic
might hold up, but we are not in a steady state.
And that is separate from why I think it is not realistic to
just replace Slack with a vibe coded alternative: just in my
company some people use the web interface, some the iOS app, and
some the Android app. To be a viable replacement you would need
all 3 platforms supported with all features. That sounds in
itself like a nightmare. Then figuring out what features my
company's members really use is another nightmare. There are some
who craft custom emojis all the time, some who integrate all kinds
of weird apps. We have various CI and data pipeline processes
integrated with Slack reporting. And then come huddles: video and
voice chat and screen sharing. You can even draw on someone else's
screen with it! IT has their needs to archive things (maybe?) or
snoop on certain things. Then of course comes interfacing with
single sign-on. I wouldn't even volunteer to enumerate all the
different features people just at my company depend on, let alone
offer to replace it.
svnt wrote 14 hours 4 min ago:
There is value in taking a product to market and hardening it,
and no one wants to invest in something that requires headcount
for cost-savings. They want upside. But if it doesn't require
headcount and/or unlocks functions they have to negotiate for,
and the AI can keep it online and troubleshoot, that is a
different story.
Slack exists in part because ten years ago it was a lot harder
for big orgs to make good/modern software.
pylua wrote 14 hours 53 min ago:
Is it the SLA and maintenance cost? As silly as it seems, it is
important for Slack to work reliably, especially in case of court
orders and legal retention.
Also, is there not a self-hosted open source solution that
companies can run? That's easier than AI.
Fire-Dragon-DoL wrote 15 hours 24 min ago:
Well, the answer is that the cost of that software is lower than
the cost of having somebody build the other software.
What happens is that all these SaaS products drop in value because
it is now realistic to build them internally.
aobdev wrote 15 hours 8 min ago:
Why does AI make it cheaper to build internal but not cheaper for
SaaS competitors to pop up? Everyone has access to the same tools.
Fire-Dragon-DoL wrote 14 hours 39 min ago:
Oh sure! My conclusion is that they will drop in value, not
disappear.
Basically I expect way smaller companies popping up competing
with the big ones and their offering will be priced way lower
because their payroll is way smaller.
While there is no competitor, internal tools will pop up now.
aobdev wrote 14 hours 17 min ago:
I believe that. Companies will build cheap tools today while
competitors are spinning up to undercut ADP, Salesforce, and
SAP. But what happens tomorrow? There are plenty of examples in
IT today where the reasonable option is to outsource in 90% of
cases: donât roll your own auth, donât host your own email
server, donât build your own data center. I donât see how
AI can change that, when the people who build specialized
software also have access to AI.
Another great example is open source. I think PostgreSQL being
free and usable by everyone is a better economic outcome than
every Fortune 500 company building their own database engine.
Payroll, ERP, and CRM fall into the same category of being
commodity software in a lot of cases.
gnz11 wrote 6 hours 44 min ago:
My experience is that the folks in charge of spending and
making decisions are looking at AI as another means of
outsourcing. Payroll, ERPs and CRMs went from commodity
software to subscription services and anything that is
subscription based is getting scrutinized much more heavily
now.
copperx wrote 14 hours 54 min ago:
It does make it cheaper, obviously. But the barrier to entry is
almost zero, like panhandling. That's why it can't substitute
for a job.
zhubert wrote 16 hours 0 min ago:
I can't believe I'm responding to an AOL article, but...
You don't understand what's happening if you dismiss the leverage
provided by AI as "vibe coding".
AbstractH24 wrote 16 hours 8 min ago:
I once built a CRM in Google Sheets fully mirroring the data model of
Salesforce, for contact, company, deal, and call tracking for a
one-sales-rep business. (Before XLOOKUP was in Google Sheets.)
Did it work? Yes. Was it worth my time to maintain and scale the
"platform" with the company rather than outsource all that to a CRM
company? Not at all.
Time is finite. Spend your time doing what you do best, pay others to
do what they do best.
stingrae wrote 13 hours 32 min ago:
It doesn't make sense for every company to make their own Salesforce
clone.
The key is that it makes it immensely easier for new companies to
enter the market and compete with Salesforce. More competition will
just force lower overall margins in SaaS.
hippo22 wrote 12 hours 55 min ago:
It's not really that hard to make a Salesforce clone now though.
Writing the software was never the hard part of building a
business.
georgeecollins wrote 4 hours 4 min ago:
Yeah, but it's still usually cheaper to pay for software than to
build and support it. I think that will be true for a long time
going forward; it's just that you can't plan on extracting a
ransom for your SaaS.
zhivota wrote 6 hours 57 min ago:
I don't know, I mean for most SaaS products this is true. But for
something like Salesforce, the feature set is incredibly broad.
The coding is not hard, so much as it is just an enormous volume
of code.
IshKebab wrote 6 hours 58 min ago:
It was never the only hard part, but it definitely was a hard
part (at least in most cases; obviously there are some monopolies
with relatively simple software - mostly where there are network
effects like WhatsApp).
But give me the source code for something competitive with
Solidworks, Jasper Gold, FL Studio, After Effects, etc. and I'm
sure as hell making a business out of it!
Furthermore while good software may not guarantee business
success, it is pretty much a requirement. I have seen many
projects fail because the software turned out to be the hard
part.
ncallaway wrote 12 hours 14 min ago:
> Writing the software was never the hard part of building a
business.
This is such an important insight that it will take the vibe
coding folks another few years to really internalize it.
matwood wrote 11 hours 47 min ago:
> few years to really internalize.
Given that many engineers have never internalized this,
you're more confident than I am.
WheelsAtLarge wrote 13 hours 58 min ago:
Yup, my experience has been that vibe-coding is very time-consuming.
It reminds me very much of how LLMs are great at creating
mind-blowing images, but you get what you get. Once you decide that
you need to modify the image you get, it becomes a time sink. You
might be able to change it and get what you need, but there is no
guarantee and it's a never ending task.
The same thing happens with code; you may get great results from your
prompt, but trying to customize it will drive you nuts and you may
never get what you want.
Maintenance is another hurdle. How do you maintain code you might not
have the skills to maintain?
Vibe-coding may reduce software creation time, but it's not taking
over software engineering. The SaaS business is going nowhere. Most
people, by far, will continue to rely on someone else for their
software needs. But be very aware that the software business will
change. We are seeing that already.
anonzzzies wrote 16 hours 20 min ago:
> "You have this innovation bazooka with these models. Why would you
point it at rebuilding payroll or ERP or CRM"
They invested in ERP/CRM? I built one (fairly complete for the
German/Italian/EU tax systems) and it saves a ton of money vs
commercial offerings. So yeah, of course we will.
alun wrote 16 hours 37 min ago:
> "You have this innovation bazooka with these models. Why would you
point it at rebuilding payroll or ERP or CRM"
Most SaaS companies are just expensive wrappers on top of existing
tools. For non-VC-funded companies, SaaS tools are a serious cost. If
you can re-create them in-house with AI, why wouldn't you? The result
is saving capital (which you can then employ to do the more innovative
things), and being in control over your own data.
nofriend wrote 16 hours 28 min ago:
If this is actually viable, then SaaS will (be forced to) lower costs
until it is no longer worthwhile.
bitwize wrote 17 hours 27 min ago:
Well, yeah. Vibe coding as in letting AI one-shot an app with a vague
description still doesn't work except on trivial, throwaway stuff.
But... spec-driven development with automated stepwise refinement by
agents recursively generating, testing, and improving the code is how
software engineering is done in the late 2020s.
klardotsh wrote 15 hours 37 min ago:
You write that in italics as if to imply it's a law that cannot be
questioned. Quite a number of shops do not engineer software like
that, or only engineer software like that where it fits the
environment the software lives in, or otherwise sit at numerous
points along the gradient between "software engineering as it has
been known for decades" and "fully computer generated
software".
dabinat wrote 18 hours 40 min ago:
The bottleneck will always be humans. You could get AI to write a
million lines of code a day, but you'd still need humans to review
and test that code. We are a very long way from being able to blindly
trust AI's outputs in production.
cal_dent wrote 11 hours 55 min ago:
I don't even think it's about reviewing and testing. The
bottleneck will always be humans.
We don't like to always admit it, but most jobs are fairly
straightforward, as in the actual day-to-day tasks. Yes, being smart
is great and useful etc., but after a certain point it's diminishing
returns on the actual tasks you have to do. Dealing with other humans
and their egos and eccentricities and the multitude of ways each person
sees the world is always what makes all jobs tricky. I suspect this
whole AI wave/hype/reality is going to open many people's eyes to
this. We will laugh that we used to call them "soft" skills.
mephitix wrote 18 hours 36 min ago:
IMO I would have agreed with this statement 2 months ago, but now
it's clear AI is already much better at reviewing and even testing
code (via spinning up simulators, etc.) than we are.
We're already using AI's outputs in production and not writing
much code these days.
andrekandre wrote 17 hours 58 min ago:
> AI is already much better at reviewing and even testing
for code in isolation, perhaps, but how does it know what is
correct for what the customer wants/needs?
acuozzo wrote 12 hours 19 min ago:
> how does it know what is correct for what the customer
wants/needs?
The way NASA does it so that they can trust deliverables from the
lowest bidder.
That is, have developers translate the wants/needs into detailed
contracts of work.
kristianp wrote 19 hours 12 min ago:
> He said that software accounts for 8% to 12% of a company's expenses,
so using vibe coding to build the company's resource planning or
payroll tools would only save about 10%. Relying on AI to write code
also carries risks, he said.
> "You have this innovation bazooka with these models. Why would you
point it at rebuilding payroll or ERP or CRM," Acharya said
> Instead, companies are better off using AI to develop their core
businesses or optimize the remaining 90% of their costs
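The savings ceiling in the quoted figures can be checked with quick arithmetic. A minimal sketch, where the total is a hypothetical placeholder and the share is the midpoint of the quoted 8-12% range:

```python
# Rough ceiling on savings from rebuilding internal software,
# using the article's quoted share (total is a hypothetical figure).
total_expenses = 1_000_000   # hypothetical annual company expenses
software_share = 0.10        # midpoint of the quoted 8-12% range

# Even if vibe coding made the entire software line item free,
# the company-wide saving is capped at the software share itself.
max_saving = total_expenses * software_share
print(f"max saving: {max_saving:,.0f} ({software_share:.0%} of expenses)")
```

Which is the article's point: the other ~90% of costs is the bigger target.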
boznz wrote 19 hours 35 min ago:
Never say never, vibe coding is not even 4 years old.
thomasjudge wrote 20 hours 40 min ago:
"aol.com"?
obiefernandez wrote 20 hours 42 min ago:
I just recreated most of Linear for my company in a few days. Making it
hyper specific to what we want (metrics driven, lean startup style).
All state changes are made with MCP, so it saved me from having to
spend time on any forms and most interactions other than filtering,
searching, sorting, etc.
Means we will be ditching Linear soon.
I know I'm an outlier but this sort of thing will get more common.
satvikpendem wrote 20 hours 16 min ago:
I don't understand this because who's gonna maintain it in the
future? Surely that costs more to pay even one person to add features
that Linear had than to pay Linear themselves. I'd do this for
personal projects but never for my work company lest I be the one to
maintain it indefinitely on top of my current work.
pizzly wrote 18 hours 26 min ago:
one thing annoying with premade solutions is that they only do 90%
of what you want; it's livable, but still doesn't quite meet your
needs.
It's not just adding features that Linear already provides, but
adding features and integrations that meet 100% of your needs.
The full decision-making equation is (cost of implementing it
yourself + cost of maintenance + 10% additional benefit for a
solution that fully meets your needs) versus (cost of a preexisting
solution that meets 90% of your needs). The cost of implementing it
and the cost of maintenance have just gone down. Surely that means
more people on the whole will choose to build in-house rather than
outsource.
Thus demand for premade solutions will go down, and SaaS providers
won't be able to increase their prices, as this will make even more
people choose to implement it themselves. The cost of producing
software will continue to drop due to agentic coding, and
maintenance cost will drop as well due to maintenance coding
agents. More people will choose their own custom solutions, and so
on. It's very possible we are at the beginning of the end for SaaS
companies.
satvikpendem wrote 18 hours 20 min ago:
I think even with vibe coding people definitely still
underestimate the stuff mentioned in this comment about IaaS:
> server operations, storage, scalability, backups, security,
compliance, etc
HTML [1]: https://news.ycombinator.com/item?id=47097450
ManuelKiessling wrote 20 hours 47 min ago:
There was a short moment in history where it seemed that the sentiment
was: people will soon 3D-print 99% of their household items themselves
instead of buying them.
You absolutely could print things like cups, soap holders, picture
frames, the small shovel you use for gardening, and so on an so on.
99% of people still just buy this stuff.
throwaway314155 wrote 17 hours 2 min ago:
That has more to do with the shortcomings of 3d printing.
alfiedotwtf wrote 11 hours 47 min ago:
Are you saying vibed code doesn't have shortcomings?
klardotsh wrote 15 hours 44 min ago:
I think some or maybe even many of those shortcomings will apply to
software, too. Making actual good software is not as trivial as
writing "make me an app", much as making an actual good spoon
is not as trivial as throwing an STL at a printer and calling it a
day.
bhewes wrote 20 hours 49 min ago:
Why is it bad for AI to replace an enterprise software layer? Other
than invalidating past investments.
captainbland wrote 20 hours 31 min ago:
A few reasons, "AI" as used by non-experts often has correctness and
security issues. Even when it doesn't, its outputs are often not
reproducible/predictable because they're probabilistic systems.
AI systems are also prone to writing code which they can't
effectively refactor themselves, implying that many of these code
bases are fiscal time bombs where human experts are required to come
fix them. If the service being replaced has transactional behaviour,
does the AI produced solution? Does the person using it know what
that means?
The other side is that AI as an industry still needs to recoup
trillions in investment, and enterprise users are potential whales
for that. Good prices in AI systems today are not guaranteed to last
because even with hardware improvements these systems need to make
money back that has been invested in them.
tehjoker wrote 16 hours 30 min ago:
Some of that latter part depends on how good and cheap open weight
systems get. The ability to deploy your own will strictly limit the
price of closed models if they aren't dominant in functionality.
random3 wrote 20 hours 51 min ago:
AI is eating the software
HTML [1]: https://a16z.com/why-software-is-eating-the-world/
Rastonbury wrote 21 hours 24 min ago:
Anyone who's seen an enterprise deal close or dealt with enterprise
customer requests will know this, the build vs buy calculus has always
been there yet companies still buy. Until you can get AI to the point
where it's equivalent to a 20-person engineering team, people are not
going to build their own Snowflake, Salesforce, Slack or ATS. Maybe
that day is 3 years away but when that happens the world will be very
different
bonesss wrote 5 hours 52 min ago:
We've also got to consider the fourth dimension: what happens over
time.
Salesforce is getting LLM superpowers at the same time the Enterprise
is, so customizing and maintaining and extending Salesforce are all
getting cheaper and better and easier for customers, consultants, and
Salesforce in parallel.
Unless the LLMs are managing the entire process, there's still a
value proposition around liability, focus, feature updates,
integrations, etc. Over time that tech should make Salesforce way
cheaper, or start helping them upsell bigger and badder Sales things
that are harder to recreate.
And, big picture, the LLMs are well trained on Salesforce API code.
Homegrown "free" versus industry-standard with clear billing,
whatever we know versus man-decades of learning at a vendor, months
of effort and all the risk & liability versus turnkey with built-in
scapegoats… at some point you're paying money not to own, not
to learn, not to be distracted, and to have jerks to sue if something
goes bad.
jcgrillo wrote 12 hours 56 min ago:
If an AI agent ever became as productive at writing code as a
well-organized 20 person engineering team you'd still need to run it
for a year or more to replicate any nontrivial SaaS product.
And the thing about many of these products isn't their feature set,
it's their stability. It's their uptime. It's how they handle scaling
invisibly and with no effort on your part. These are things you can't
just write down from whole cloth, they are properties that emerge
over time by adapting to the reality of scale. Coding isn't the
whole deal, and your 20x clanker which can do nothing but re-arrange
text in interesting patterns is going to have some trouble with the
realities of taking that PoC to production. You'll still need
experienced, capable people for that. And lots of time.
A lot of this "ermahgerd everything will change" drivel is based on
some magical fundamentally new technology emerging in the near future
that can do things that LLMs cannot do. But as far as anyone knows,
that future may be never.
So even given a large improvement in agentic coding I'm not convinced
it really changes the build vs buy equation much.
owlstuffing wrote 15 hours 43 min ago:
Imagine a 20 person engineering team that hallucinates on a regular
basis and is incapable of innovation.
alex_suzuki wrote 11 hours 25 min ago:
I think you've just described an average Accenture setup.
geraneum wrote 20 hours 12 min ago:
> Until you can get AI to the point where it equivalent to a 20
person engineering team
I think that's gonna happen when you don't need software and AI
just does it all.
rckclmbr wrote 19 hours 43 min ago:
Exactly. I was building an app to track bike part usage. It was an
okay app, but then I just started using ai with the database
directly. Much more flexible, and I can get anything I need right
then. AI will kill a lot of companies, but it won't be the
software it develops, it will be the agent itself
adrianwaj wrote 15 hours 25 min ago:
Do you run the app locally?
If it's not local, I saw this comment: [1] "This entire stack
could give you computing power equivalent to a 25k euro/month AWS
bill for the cost of electricity (same electricity cost as
running a few fridges 24/7) plus about 50k euros one-time to set
it up (about 4 Mac Studios). And yes, it's redundant, scalable,
and even faster (in terms of per-request latency) than standard
AWS/GCP cloud bloat. Not only is it cheaper and you own
everything, but your app will work faster because all services
are local (DB, Redis cache, SSD, etc.) without any VM overhead,
shared cores, or noisy neighbours."
Makes me think there will be these prompts like "convert this app
to suit a new stack for my hardware for locally-optimized
runtime."
How are people building the best local stacks? Will save people a
ton of money if done well.
HTML [1]: https://news.ycombinator.com/item?id=47085906
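The break-even implied by that quoted claim is straightforward arithmetic (figures as quoted in the comment; electricity and maintenance ignored):

```python
# Payback period for the quoted local-hardware setup vs. the AWS bill.
aws_monthly_eur = 25_000   # quoted equivalent AWS spend per month
local_setup_eur = 50_000   # quoted one-time cost (~4 Mac Studios)

breakeven_months = local_setup_eur / aws_monthly_eur
print(f"hardware pays for itself in {breakeven_months:g} months")
```

If the quoted equivalence holds at all, the hardware amortizes almost immediately, which explains the interest in local stacks.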
fud101 wrote 16 hours 42 min ago:
Yep, we'll evolve patterns which facilitate system to system
interaction better than the ones we had built for human in the
loop by humans. That's inevitable. CRUD apps with a frontend will
be considered legacy etc. They'll be replaced by more efficient
means we haven't even considered. We live in an exciting time.
adrianwaj wrote 15 hours 50 min ago:
That could be AI 2.0 vs AI 1.0 like what we're in now?
Better and cheaper hardware too. Maybe it'll be DeAI?
(decentralized)
Will combine with Crypto 2.0 - whatever that may be.
fud101 wrote 15 hours 18 min ago:
The only real downside is we will collapse society but that's
a small price to pay for progress.
girvo wrote 7 hours 32 min ago:
Think of the shareholder value we made!
bensyverson wrote 20 hours 44 min ago:
I agree generally, but some of these enterprise contracts are
eye-watering. If the choice is $2M/year with a 3-year minimum
contract, or rolling your own, I think the calculus really has shifted.
With that said, the entire business world does not understand that
software is more than just code. Even if you could write code
instantly, making enterprise software would still take time, because
there are simply so many high-stakes decisions to make, and so much
fractal detail.
nicoburns wrote 16 hours 9 min ago:
> If the choice is $2M/year with a 3-year minimum contract, or
rolling your own, I think the calculus really has shifted.
But why? It was always dramatically cheaper for enterprises to
build rather than buy. They stopped doing that becuase they did
that in the 90s and ended up with legacy codebases that they didn't
know how to maintain. I can't see AI helping with that.
xoz123 wrote 7 hours 53 min ago:
If you consider total cost of ownership including long-term
maintenance costs, it means building has not always been cheaper
than buying. I think what's changing is that it's now becoming
dramatically cheaper to build AND operate AND maintain "good
enough" bespoke software for a lot of use cases, at least in
theory, below a certain threshold of complexity and criticality.
Which seems likely to include a sizeable chunk of the existing
SAAS market.
I can't believe I'm saying this, but I guess you don't even
really need to maintain software if it's just a tool you hacked
together in a week. You can build v2 in another week. You'll
probably want to change it anyways as your users and your org
evolve. It's a big question for me how you maintain quality in
this model but again, if your quality standard is "good enough",
we're already there.
etothepii wrote 15 hours 6 min ago:
This might be the biggest benefit of AI coding. If I have a large
legacy code base I can use AI to ask questions and find out where
certain things are happening. This benefit is huge even if I
choose not to vibe code anything. It ends up feeling a lot like
the engineer that wrote the code is still with you or documented
everything very well. In the real world there is a risk that
documentation is wrong or that the engineer misremembers some
detail so even the occasional hallucination is not a particularly
big risk.
nicoburns wrote 7 hours 17 min ago:
> This might be the biggest benefit of AI coding. If I have a
large legacy code base I can use AI to ask questions and find
out where certain things are happening. This benefit is huge
even if I choose not to vibe code anything.
I definitely agree with this.
designerarvid wrote 21 hours 18 min ago:
Companies make make/buy decisions on everything, not just software.
Cleaning services are not expensive, yet companies contract them
instead of hiring staff.
This is called transaction cost economics, if anyone's interested.
dnautics wrote 21 hours 32 min ago:
you cant easily vibecode everything. in my startup this is what I am
not buying (and vibecoding):
- JIRA/trello/monday.com
- benchling
- obsidian
this is what i buy and have no intent to replace:
- carta
- docusign
- gusto/rippling
- bank
this is what might be on the chopping block:
- gsuite
safety1st wrote 16 hours 41 min ago:
Just in case you weren't aware, Gsuite has a clone of Docusign built
into it now.
dnautics wrote 15 hours 57 min ago:
hate to say it, because who likes monopolies, but it's easier to
send people docusign because then they don't go Wtf?
levkk wrote 20 hours 19 min ago:
I'm curious about your reasoning. Jira/Trello etc. are like
$10/mo/seat, why bother rewriting them from scratch? You'll spend
more in tokens doing so. Same for gmail/google calendar, what's the
ROI? Those tools are reliable and cheap, why bother creating your
own?
dnautics wrote 20 hours 3 min ago:
jira/trello: ergonomics. to set them up correctly exactly the way
i want would take me 20 hours (or hire a PM), i can vibecode for
20h and get the same result.
plus, being able to crossref internal data types is chef's kiss.
im paying for claude pro so it's use it or lose it. when i finish
everything and have it battle tested i can end my claude code. and
anyways when i have 10 employees, it's parity.
for gsuite: i want to own everything internally eventually and
having internal xrefs will be nice. the gsuite data is incidental,
what is truly valuable about gsuite is spam detection and the oauth
capability
rogerrogerr wrote 21 hours 17 min ago:
Why not Docusign? Not challenging, just curious why that is
specifically on your list. Reputation?
Ekaros wrote 7 hours 31 min ago:
Sometimes value is not in the code or the product. But the fact
that leg work is done and something is generally accepted for the
purpose. For me it looks like the type of product where the pain is
not making the software. It is getting everyone you will deal with
to agree that the software is acceptable.
dnautics wrote 20 hours 1 min ago:
the common factor was sort of left as an exercise to the reader to
think about moats in the age of AI... but basically anything that
has touchpoints to the legal and financial systems im not gonna
touch with a 20 ft vibecoded pole.
upmind wrote 22 hours 42 min ago:
A16Z's opinion is worthless to me; they know very little about the
market. Furthermore, they're notorious for having a lot of "partners".
thenaturalist wrote 21 hours 55 min ago:
Pretty worthless take posting an ad-hominem attack instead of
addressing the actual content of the article/ statement.
neom wrote 21 hours 56 min ago:
Depends on the partner, Peter Levine is a pretty damn good picker
(supported us series A to IPO).
HTML [1]: https://en.wikipedia.org/wiki/Peter_J._Levine
SilverElfin wrote 22 hours 5 min ago:
Has everyone forgotten about when they pumped absurd crypto scams
like NFTs
mountainriver wrote 22 hours 18 min ago:
Their whole game is just pump and dump
NinjaTrance wrote 23 hours 4 min ago:
The possibility that anyone can easily replicate any startup scares
A16Z.
themafia wrote 22 hours 56 min ago:
The incompetent have always pantomimed the competent. It never
works. Although the incompetent will always pay a huge amount to try
to achieve this fantasy.
TeMPOraL wrote 7 hours 25 min ago:
You're joking. Most startups are the incompetent. Throwing enough
money at sales and marketing can make anything work.
toomuchtodo wrote 23 hours 1 min ago:
This is what always confused me about VC AI enthusiasm. Their moat is
the capital. As AI improves, it destroys their moat. And yet, they
are stoked to invest in it, the architects of their own demise.
fullshark wrote 20 hours 58 min ago:
There's no alternative; they can't collectively freeze out all
AI investment and force it to die.
ironhaven wrote 21 hours 56 min ago:
Don't you have that backwards? If AI gets so good that it can
replace all human labor, will capital like money and data centers
be the only moat left?
crazylogger wrote 17 hours 13 min ago:
Money is useful mostly for hiring human labor to outcompete
others, e.g. Satya Nadella has 100K employees under his command,
you don't, so you can't realistically compete with MS today -
this is their main moat.
If AI renders human labor a cheap commodity (say you can
orchestrate a bunch of agents to develop + market a Windows
competitor for $1000 of compute), what used to be "Satya + his
army vs. you" now becomes mostly a 1:1 fair fight, which favors
the startup.
seg_lol wrote 14 hours 27 min ago:
Frankly, you have a pretty good chance of displacing windows
right now. You should go for it.
toomuchtodo wrote 21 hours 36 min ago:
How powerful is the device you wrote this comment from? On prem
or self hosted affordable inference is inevitable.
georgemcbay wrote 21 hours 43 min ago:
> If AI gets so good that it can replace all human labor, will
capital like money and data centers be the only moat left?
If AI gets good enough to replace all human labor then actual
physical moats to keep the hungry, rioting replaced humans away
will be the most important moats.
alfiedotwtf wrote 11 hours 45 min ago:
Did you see those Chinese robots from last week? I'm pretty
sure they've got their moats covered.
satvikpendem wrote 20 hours 15 min ago:
Which is bought by money in the first place, see billionaire
doomsday bunkers. The poor will not have such a bunker.
acuozzo wrote 12 hours 15 min ago:
Unless they intend on generating their own oxygen to breathe,
I don't see how these bunkers stand a chance.
satvikpendem wrote 12 hours 12 min ago:
Fortunately they do.
rsrsrs86 wrote 1 day ago:
I hear and read so much shit by VCs, both on LinkedIn and in private
meetings. Especially Menlo says a lot of shit (check LinkedIn). Deloitte
and McKinsey are also full of crap. Really.
VCs are chock-full of companies that can be cloned overnight, SaaS
companies that will face ridiculously fast substitution, and a whoooole
lotta capital deployed on lousy RAGs and OpenAI wrappers.
bigbuppo wrote 18 hours 32 min ago:
The bullshit people love the bullshit generators.
manoDev wrote 1 day ago:
People are overestimating the value on having AI create something given
loose instructions, and underestimating the value of using AI as a tool
for a human to learn and explore a problem space. The bias shows in the
terminology ("agents").
We finally made the computer able to speak "our" language - but we
still see computers as just automation. There's a lot of untapped
potential in the other direction, in encoding and compressing knowledge
IMO.
georgeecollins wrote 4 hours 3 min ago:
Right! It's like maybe the AI is more of a threat to the accounts
payable person than the accounts payable software. At least in terms
of head count.
qudat wrote 4 hours 19 min ago:
100% agree. I'd add we are underestimating our contributions in
making the code agents do the right thing as well.
preommr wrote 14 hours 43 min ago:
Because that would mean AI isn't going to replace entire industries,
which is the only way to justify the, not billions, but trillions in
market value that AI leaders keep trying to justify.
jamesmcq wrote 17 hours 18 min ago:
Exactly my thoughts - the value in AI is not auto-generating anything
more than something trivial, but there's huge value in a more
customized knowledge engine - a targeted, specific Google if you
will. Get answers to your specific question instead of results that
might contain what you were looking for if you slog through them.
AI is hugely beneficial in understanding a problem, or at least
getting a good overview, so you can then go off and solve/do it
yourself, but focusing on "just have the AI generate a solution" is
going to hugely harm AI perception/adoption.
themafia wrote 22 hours 54 min ago:
> AI create something
To have AI recreate something that was already in its training set.
> in encoding and compressing knowledge IMO.
I'd rather have the knowledge encoded in a way that doesn't generate
hallucinations.
fsddd wrote 1 day ago:
The problem space is rich. The thing doesn't actually know what a problem
is.
The thing is incredibly good at searching through large spaces of
information.
consumer451 wrote 1 day ago:
42
fsddd wrote 16 hours 55 min ago:
Not sure what you mean by that lol
theturtletalks wrote 1 day ago:
All these articles seem to think people will vibe code by prompting:
make me my own Stripe
make me my own Salesforce
make me my own Shopify
It will be more like:
Look at how Lago, an open-source Stripe layer, works and make it work
with Authorized.net directly
Look at Twenty, an open-source CRM, and make it work in our tech stack
for our sales needs
Look at how Medusa, an open-source e-commerce platform, works and what
features we would need and bring into our website
When doing the latter, getting a good-enough alternative will reduce
the need for commercial SaaS. On top of that, these commercial SaaS
products are bloated with features in their attempt to work with as
many use cases as possible, and configuring them is "coding" by
another name. Throw in enshittification and the above seems to be the
next logical move by companies looking to move off these apps.
thenaturalist wrote 21 hours 49 min ago:
I highly doubt that, and it's in OP's article.
First, a vendor will have the best context on the inner workings and
best practices of extending the current state of their software. The
pressure on vendors to make this accessible and digestible to agents/
LLMs will increase, though.
Secondly, if you have coded with LLM assistance (not vibe coding),
you will have experienced the limited ability of one shot stochastic
approaches to build out well architected solutions that go beyond
immediate functionality encapsulated in a prompt.
Thirdly, as the article mentions, opportunity cost will never make
this favorable - unless the SaaS vendor was extorting on price
before. The direct cost of mental overhead and time of an internal
team member to hand-hold an agent/ write specs/ debug/ firefight some
LLM assisted/ vibe coded solution will not outweigh the upside
potential of expanding your core business unless you're a stagnant
enterprise product on life support.
ben_w wrote 22 hours 47 min ago:
Sensible people would do that (asking for just the features they
need), but look at us, are we sensible?
Most of us* are working for places whose analytics software
transitively asks the user for permission to be tracked by more
"trusted" partners than the number of people in a typical high
school, which transitively includes more bytes of code than the total
size of DOOM including assets, with a performance hit so bad that it
would be an improvement for everyone if the visitor remote desktop-ed
into a VM running Win95 on the server.
And people were complaining about how wasteful software was when
Win95 was new.
* Possibly an exaggeration, I don't know what business software is
like; but websites and, in my experience at least, mobile apps do
this.
nradov wrote 1 day ago:
The value in enterprise SaaS offerings isn't just the application
functionality but the IaaS substrate underneath. The vendor handles
server operations, storage, scalability, backups, security,
compliance, etc. It might be easier for companies to vibe code their
own custom applications now but LLMs don't help nearly as much with
keeping those applications running. Most companies are terrible at
technical operations. I predict we'll see a new wave of IaaS startups
that sell to those enterprise vibe coders and undercut the legacy
SaaS vendors.
tayo42 wrote 22 hours 23 min ago:
Are the infrastructure tools available already not easy enough to
build on? We have all these serverless options already.
hparadiz wrote 23 hours 12 min ago:
I've been confronting this truth personally. For years I had a
backlog of projects that I always put off because I didn't have the
capacity. Now I have the capacity but without the know-how to sell
it. It turns out that everything comes back to sales and building
human relationships. Sort of a prerequisite to having operations.
selridge wrote 1 day ago:
The right move is this, turned to 11.
Velocity or one-shot capability isn't the move. It's making stuff
that used to be traumatic just...normal now.
Google fucking vibe-coded their x86 -> ARM ISA changeover. It never
would have been done without agents. Not like "google did it X%
faster." Google would have let that sit forever because the labor
economics of the problem were backwards.
That doesn't MATTER anymore. If you have some scratch, some halfway
decent engineers, and a clear idea, you can build stuff that was just
infeasible or impossible. all it takes is time and care.
Some people have figured this out and are moving now.
seg_lol wrote 14 hours 5 min ago:
Google3 was already PPC clean when they did that. Not as impressive
as made out to be.
est31 wrote 22 hours 59 min ago:
I think something like an x86 -> ARM change is perfect example of
something where LLM assisted coding shines. lots of busywork (i.e.
smaller tasks that don't require lots of context of the other
existing tasks), nothing totally novel to do (they don't have to
write another borg or spanner), easy to verify, and 'translation'.
LLMs are quite good at human language translation, why should they
be bad at translating from one inline assembly language to another?
selridge wrote 21 hours 49 min ago:
Yeah. Lots of busywork where if you had to assign it to a human
you would need to find someone with deep technical expertise plus
inordinate, unflagging attention to detail. You couldn't pass
it off to a batch of summer interns. It would have needed to be
done by an engineer with some real experience. And there is no
way in the world you could hire enough to do it, for almost any
money.
mattmanser wrote 21 hours 13 min ago:
You've missed the subtlety here.
LLMs don't have attention to detail.
This project had extremely comprehensive, easily verifiable,
tests.
So the LLM could be as sloppy as they usually are; they just
had to keep redoing their work until the code actually worked.
selridge wrote 13 hours 10 min ago:
I missed the subtlety?
I linked the paper! I read the paper. Yeah. they wrote the
tests, which is how this worked! how the heck do you think it
was supposed to work?
the fact that they needed to write the tests was just the
means to implementation. It didn't change the non-LLM labor
economics of the problem.
mattmanser wrote 10 hours 4 min ago:
No, I meant subtlety of definition, you've attributed the
diligence to the LLM when in fact it's the tests that
provide that.
You've unfortunately committed the big sin of
anthropomorphizing the LLM and calling it diligent.
An LLM cannot be diligent; it's stochastic, so it's
literally impossible for it to be diligent.
Writing all those tests was diligent.
salawat wrote 19 hours 54 min ago:
Who wrote the tests?
bigbuppo wrote 18 hours 36 min ago:
The meat wrote the tests. As I've been telling you, they're
made out of meat.
jrumbut wrote 1 day ago:
> Google would have let that sit forever because the labor
economics of the problem were backwards.
This has been how all previous innovations that made software
easier to make turned out.
People found more and more uses for software and that does seem to
be playing out again.
selridge wrote 1 day ago:
I really don't think we're living in a "linearly interpolate from
past behavior" kinda situation. [1] Just read some of that. It's
not long. This IS NOT the past telescoping into the future. Some
new shit is afoot.
HTML [1]: https://arxiv.org/abs/2510.14928
theturtletalks wrote 1 day ago:
Exactly, if the engineers know where to look for the solution in
open-source code and point the AI there, it will get them there.
Even if the language or the tech stack are different, AI is
excellent at finding the seams, those spots where a feature
connects to the underlying tech stack, and figuring out how the
feature is really implemented, and bringing that over.
whatever1 wrote 1 day ago:
So maybe the saas will pivot to just sell some barebone agents that
include their real IP? The rest (UI, dashboards and connectivity)
will be tailored made by LLMs
benreesman wrote 1 day ago:
Vibecoding is a net wealth transfer from frightened people to
unscrupulous people.
Machine assisted rigorous software engineering is an even bigger wealth
transfer from unscrupulous people to passionate computer scientists.
rsrsrs86 wrote 1 day ago:
Sadly, this is the most serious comment here. People who are not
shocked are people who haven't seen what a highly educated computer
scientist can do in single player mode.
benreesman wrote 1 day ago:
Sure they have: [1] [2] [3] [4] [5] I'll take all comers, any
conceivable combination of unassisted engineers of arbitrary
Carmack/God-level ability, no budgetary limits, and I'll bet my net
worth down to starvation poverty that I will clobber them flat by
myself. This is not because I'm such hot shit, it's a weird Venn
that puts me on the early side on this, but there are others and
there will be many more as people see the results.
So there are probably people who can beat me today, and that
probability goes to one as Carmack-type people go full "press the
advantage" mode on a long enough timeline, there are people who are
strictly more talented and every bit as passionate, and the
paradigm will saturate.
Which is why I spend all my time trying to scale it up, I'm working
on how to teach other people how to do it, and solve the
bottlenecks that emerge. That's a different paradigm that saturates
in a different place, but it is likewise sigmoid-shaped.
That, and not single-player heroics, stunts basically, is the next
thousand-year paradigm. And no current Valley power player even
exists in that world. So the competition I have to worry about is
very real, but not at all legible.
I don't know much about how this will play out other than it's the
fucking game at geopolitical levels, and the new boss will look
nothing like the old boss.
HTML [1]: https://news.ycombinator.com/item?id=47083506
HTML [2]: https://news.ycombinator.com/item?id=47045406
HTML [3]: https://youtu.be/uBGotJvlh7E
HTML [4]: https://youtu.be/V9YSC4gBagg
HTML [5]: https://youtu.be/ghm9F0RCFsY
godelski wrote 1 day ago:
Let's just look at Dijkstra's On the Foolishness of "Natural Language
Programming". It really does a good job at explaining why natural
language programming (and thus, Vibe Coding) is a dead end. It serves
as a good reminder that we developed the languages of Math and
Programming for a reason. The pedantic nature is a feature, not a flaw.
It is because in programming (and math) we are dealing with high levels
of abstraction constantly and thus ambiguity compounds. Isn't this
something we learn early on as programmers? That a computer does
exactly what you tell it to, not what you intend to tell it to? Think
about how that phrase extends when we incorporate LLM Coding Agents.
| The virtue of formal texts is that their manipulations, in order to
be legitimate, need to satisfy only a few simple rules; they are, when
you come to think of it, an amazingly effective tool for ruling out all
sorts of nonsense that, when we use our native tongues, are almost
impossible to avoid.
- Dijkstra
All of you have experienced the ambiguity and annoyances of natural
language. Have you ever:
- Had a boss give you confusing instructions?
- Argued with someone only to find you agree?
- Talked with someone and one of you doesn't actually understand the
other?
- Talked with someone and the other person seems batshit insane but
they also seem to have avoided a mental asylum?
- Use different words to describe the same thing?
- When standing next to someone and looking at the same thing?
- Adapted your message so you "talk to your audience"?
- Ever read/wrote something on the internet? (where "everyone" is
the audience)
Congrats, you have experienced the frustrations and limitations of
natural language. Natural language is incredibly powerful and the
ambiguity is a feature and a flaw, just like how in formal languages
the precision is both a feature and a flaw. I mean it can take an
incredible amount of work to say even very simple and obvious things
with formal languages[1], but the ambiguity disappears[2].
Vibe Coding has its uses and I'm sure that'll expand, but the idea of
it replacing domain experts is outright laughable. You can't get it to
resolve ambiguity if you aren't aware of the ambiguity. If you've ever
argued with the LLM take a step back and ask yourself, is there
ambiguity? It'll help you resolve the problem and make you recognize
the limits. I mean just look at the legal system, that is probably one
of the most serious efforts to create formalization in natural language
and we still need lawyers and judges to sit around and argue all day
about all the ambiguity that remains.
I seriously can't comprehend how on a site whose primary users are
programmers this is an argument. If we somehow missed this in our
education (formal or self) then how do we not intuit it from our
everyday interactions?
[0] [1] [2] Most programming languages are some hybrid variant. e.g.
Python uses duck typing: if it looks like a float, operates like a
float, and works as a float, then it is probably a float. Or another
example even is C, what used to be called a "high level programming
language" (so is Python a celestial language?). Give up some
precision/lack of ambiguity for ease.
HTML [1]: https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.h...
HTML [2]: https://en.wikipedia.org/wiki/Principia_Mathematica
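The duck-typing trade-off in the footnote can be made concrete with a small sketch (the function and examples are my own illustration, not from the comment):

```python
# Duck typing, concretely: mean() never names a type. Anything that
# supports sum(), len(), and "/" is accepted - whether or not you meant it.
from fractions import Fraction

def mean(xs):
    return sum(xs) / len(xs)

print(mean([1, 2, 3]))                         # 2.0 - ints quack like floats
print(mean([Fraction(1, 2), Fraction(1, 4)]))  # 3/8 - so do Fractions
# mean(["a", "b"]) fails only at runtime with a TypeError: the ambiguity
# is paid for when it bites, not when the code is written.
```

That is the "give up some precision for ease" point: nothing is wrong until the wrong duck waddles in.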
andrekandre wrote 17 hours 34 min ago:
> we developed the languages of Math and Programming for a reason
yes, but sadly many businesses don't care about any of that...
godelski wrote 13 hours 54 min ago:
It's extra sad because they would be more profitable if they
recognized this.
Sometimes I wonder why companies are so resistant to making
profits. It can be really strange. To be so profit focused yet
throw away so much just because it is a bit more effort or a bit
slower. But I guess most people are penny wise and pound foolish.
selridge wrote 1 day ago:
Dijkstra also said no one should be debugging and yet here we are.
He's not wrong about the problems of natural language YET HERE WE ARE.
That would, I think, cause a sensible engineer to start poking at the
predicate instead of announcing that the foregone conclusion is near.
We should take seriously the possibility that this isn't going to end
in a retrenchment which bestows a nice little atta-boy sticker on all
the folks who said I told you so.
godelski wrote 1 day ago:
> Dijkstra also said no one should be debugging
Given how you're implying things, you're grossly misrepresenting
what he said. You've either been misled or misread. He was
advocating for the adoption and development of provably correct
programming.
Interestingly I think his "gospel" is only more meaningful today.
| Apparently, many programmers derive the major part of their
intellectual satisfaction and professional excitement from not
quite understanding what they are doing. In this streamlined age,
one of our most under-nourished psychological needs is the craving
for Black Magic, and apparently the automatic computer can satisfy
this need for the professional software engineers, who are secretly
enthralled by the gigantic risks they take in their daring
irresponsibility. They revel in the puzzles posed by the task of
debugging. They defend - by appealing to all sorts of supposed
Laws of Nature - the right of existence of their program bugs,
because they are so attached to them: without the bugs, they feel,
programming would no longer be what it used to be! (In the latter
feeling I think - if I may say so - that they are quite correct.)
| A program can be regarded as an (abstract) mechanism embodying
as such the design of all computations that can possibly be evoked
by it. How do we convince ourselves that this design is correct,
i.e. that all these computations will display the desired
properties? A naive answer to this question is "Well, try them
all.", but this answer is too naive, because even for a simple
program on the fastest machine such an experiment is apt to take
millions of years. So, exhaustive testing is absolutely out of the
question.
| But as long as we regard the mechanism as a black box, testing
is the only thing we can do. The unescapable conclusion is that we
cannot afford to regard the mechanism as a black box
I think it's worth reading in full
HTML [1]: https://www.cs.utexas.edu/~EWD/transcriptions/EWD02xx/EWD2...
selridge wrote 1 day ago:
>no one should be debugging
He literally said those exact words out loud from the audience
during a job talk.
And yeah, the total aim and the reason why he might just blurt
that out is because a lot of the frustration and esprit de corps
of programming is held up in writing software that's more a guess
about behavior than something provably correct. Perhaps we all
ought to be writing provably correct software and never debugging
as a result. We don't. But perhaps we ought to.
Is control via natural language a doomed effort? Perhaps, but I'd
be cautious rather than confident about predicting that.
godelski wrote 1 day ago:
> He literally said those exact words out loud from the
audience during a job talk.
Yes, I even provided the source...
Unfortunately despite being able to provide a summary I'm
unable to actually read it for you. You'll actually need to
read the whole thing and interpret it. You have a big leg up
with my summary but being literate or not is up to you. As for
me, I'm not going to argue with someone who chooses not to read
selridge wrote 21 hours 52 min ago:
I sincerely doubt you produced the source where he asked that
question in the middle of someone else's job talk.
Which is what I was referring to. I read what you wrote, pal.
Did you read what I wrote?
godelski wrote 20 hours 33 min ago:
> I sincerely doubt you produced the source
Either I did or didn't. What is not in question is that I
provided a source.
> I read what you wrote, pal.
Forgive me for not believing you. I linked a source and you
made speculations about what was in it. If you can't bother
to read that then why should I believe you read anything
else? Reading requires more than saying the words aloud in
your head. At least if you want to read above a 3rd grade
level. Yes, I'm being mean, but if you don't have the
patience to actually read the comment you're responding to
you then you shouldn't expect anyone to have the patience
to respond to your rude behavior with kindness.
selridge wrote 13 hours 13 min ago:
Please just read what I wrote. Please. You and I are
talking about different things. You showed me a source
for your claim and then acted like I was somehow
misreading your source when I just wasn't talking about
it.
We're basically in agreement, but you want to act like
you're teaching me something. It's irritating.
One can fully understand that the goal is to write
provable programs and yet we do not, we write programs
that need debugging. So therefore, I don't think it's
hard to imagine that if we get along with that, we may
get along with natural language in the control channel,
despite that being also proscribed in that vaunted essay
you linked to me.
falcor84 wrote 1 day ago:
> Vibe Coding has its uses and I'm sure that'll expand, but the idea
of it replacing domain experts is outright laughable.
I don't think that's the argument. The argument I'm seeing most is
that most of us SWEs will become obsolete once the agentic tools
become good enough to allow domain experts to fully iterate on
solutions on their own.
bigbuppo wrote 18 hours 30 min ago:
How do you get domain experts?
shalmanese wrote 1 day ago:
> The argument I'm seeing most is that most of us SWEs will become
obsolete once the agentic tools become good enough to allow domain
experts to fully iterate on solutions on their own.
That's been the argument since the 5PL movement in the 80s. What
we discover is that domain expertise and articulation of domain
expertise into systems are two orthogonal skills that occasionally
develop in the same person but, in general, require distinct
specialization.
elzbardico wrote 19 hours 1 min ago:
It never worked because a lot of times, domain experts are stuck
in their ways of doing things and the real innovation came from
engineers learning from domain experts but adding their
technically informed insights on the recipe to create novel ways
of working.
A Lotus 1-2-3 vibecoded by a Product Manager in 1979 would
probably have had a hotkey for a calculator.
rsrsrs86 wrote 1 day ago:
Yes, 4GL and 5GL failed, but authoring Access applications should
be a breeze now.
godelski wrote 1 day ago:
> The argument I'm seeing most is that most of us SWEs will become
obsolete
That is equivalent to "replacing domain experts", or at least was
my intent. But language is ambiguous lol. I do think programmers
are domain experts. There are also different kinds of domain
experts but I very much doubt we'll get rid of SWEs.
Though my big concern right now is that we'll get rid of juniors
and maybe even mid levels. There's definitely a push for that and
incentives from an economic point of view. But it will be
disastrous for the tech industry if this happens. It kills the
pipeline. There can be no wizards without noobs. So we have a real
life tragedy of the commons situation staring us in the face. I'm
pretty sure we know what choices will be made, but I hope we can
recognize that there's going to need to be cooperation to solve
this, lest we all suffer.
j45 wrote 1 day ago:
Just because we can code something faster or cheaper doesn't increase
the odds it will be right.
falcor84 wrote 1 day ago:
Arguably it does, because being able to experience something gives
you much more insight into whether it's right or not - so being able
to iterate quickly many times, continuously updating your spec and
definition of done should help you get to the right solution. To be
clear, there is still effort involved, but the effort becomes more
about the critical evaluation rather than the how.
lelanthran wrote 22 hours 35 min ago:
Iteration only matters when the feedback is used to improve.
Your model doesn't improve. It can't.
mountainriver wrote 22 hours 17 min ago:
Your model can absolutely improve
thenaturalist wrote 21 hours 47 min ago:
How would that work out barring a complete retraining or human
in the loop evals?
baq wrote 22 hours 29 min ago:
The magic of test time inference is the harness can improve even
if the model is static. Every task outcome informs the harness.
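The "static model, improving harness" claim can be sketched with a toy, entirely hypothetical stand-in (no real agent framework is being described; the stub model and memory scheme are invented for illustration):

```python
# Toy illustration: the "model" is frozen, but the harness records
# verified outcomes and replays them as context, so later attempts at
# the same task succeed even though the model never changes.

def stub_model(task, examples):
    """Stands in for a static LLM: answers from a matching remembered
    example if one exists, otherwise guesses (wrongly) that it's 0."""
    for seen_task, answer in examples:
        if seen_task == task:
            return answer
    return 0

class Harness:
    def __init__(self):
        self.memory = []  # verified (task, answer) pairs - the part that learns

    def run(self, task, check):
        answer = stub_model(task, self.memory)
        if check(answer):                # every task outcome informs the harness
            self.memory.append((task, answer))
        return answer

    def solve_with_retry(self, task, check, verified_answer):
        """On failure, store a verified fix; the same frozen model then
        'gets it right' on every later attempt at this task."""
        if not check(self.run(task, check)):
            self.memory.append((task, verified_answer))
        return self.run(task, check)

h = Harness()
is_five = lambda a: a == 5
print(h.solve_with_retry("2+3", is_five, 5))  # 5: harness improved, model didn't
```

The design choice is that all the adaptation lives outside the weights, which is also why (per the sibling comment) it still needs evaluation and oversight around it.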
thenaturalist wrote 21 hours 42 min ago:
> The magic
Hilarious that you start with that as TAO requires
- Continuous adaptation makes it challenging to track
performance changes and troubleshoot issues effectively.
- Advanced monitoring tools and sophisticated logging systems
become essential to identify and address issues promptly.
- Adaptive models could inadvertently reinforce biases present
in their initial training data or in ongoing feedback.
- Ethical oversight and regular audits are crucial to ensure
fairness, transparency, and accountability.
Not much magic in there if it requires good old human oversight
every step of the way, is there?
baq wrote 8 hours 36 min ago:
Goalposts whooshing by at maglev speed.
Of course it needs human supervision, see IBM 1979. Oversight
however doesn't mean the robots wait for approvals doing
r&d, and that's where the magic is - the magic being robots
overseeing their training and improvement of their harnesses.
IOW only the ethics and deployment decisions need to be gated
by human decisions. The rest is just chugging along 1% a
month, 1% a week, 1% a day...
packetlost wrote 1 day ago:
But that's not the only problem.
To illustrate, I'll share what I'm working on now. My company's ops
guy vibe coded a bunch of scripts to manage deployments. On the
surface, they appear to do the correct thing. Except they don't.
The tag for the Docker image used is hardcoded in a yaml file and
doesn't get updated anywhere unless you do it manually. The docs
don't even mention half of the necessary scripts/commands or
implicit setup necessary for any of it to work in the first place,
much less the tags or how any of it actually works. There are two
completely different deployment strategies (direct to VM with
docker + GCP and a GKE-based K8s deploy). Neither fully work, and
only one has any documentation at all (and that documentation is
completely vibed, so has very low information density). The only
reason I'm able to use this pile of garbage at all is because I
already know how all of the independent pieces function and can
piece it together, but that's after wasting several hours of "why
the fuck aren't my changes having an effect." There are very, very
few lines of code that don't matter in well architected systems,
but many that don't in vibed systems. We already have huge problems
with overcomplicated crap made exclusively by humans, that's been
hard enough to manage.
Vibe coding consistently gives the illusion of progress by fixing
an immediate problem at the expense of piling on crap that obscures
what's actually going on and often breaks existing functionality.
It's frankly not sustainable.
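As an aside, the stale-hardcoded-tag failure described above is commonly avoided by stamping the tag at deploy time rather than committing it. A minimal stdlib sketch; the file name, YAML key, and git-SHA tag convention are all hypothetical, not from the comment:

```python
# Hedged sketch: rewrite "image: <name>:<old-tag>" in a deploy file so the
# tag always matches the image that was just built, instead of going stale.
import pathlib
import re
import subprocess

def current_tag():
    # e.g. tag images with the short git SHA of the commit being deployed
    return subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True).strip()

def pin_image_tag(yaml_path, image, tag):
    """Replace whatever tag follows 'image: <image>:' with the fresh one."""
    p = pathlib.Path(yaml_path)
    text = p.read_text()
    pinned = re.sub(rf"(image:\s*{re.escape(image)}):\S+",
                    rf"\g<1>:{tag}", text)
    p.write_text(pinned)
```

Run as a deploy step, e.g. `pin_image_tag("deploy.yaml", "example/app", current_tag())`, so a human never has to remember the manual edit.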
That being said, I've gotten some utility out of vibe coding tools,
but it mostly just saves me some mental effort of writing boring
shit that isn't interesting, innovative, or enjoyable, which is
like 20% of mental effort and 5% of my actual work. I'm not even
going to get started on the context switching costs. It makes my
ADHD feel happy but I'm confident I'm less productive because of
the secondary effects.
TeMPOraL wrote 7 hours 40 min ago:
I was trying to formulate my argument to disagree with the "cost
center" thinking in [1] , until I saw this comment. Now I feel
that 'alephnerd might be right after all.
> (...) ops (...) a bunch of scripts to manage deployments.
Devops is prime example of work to be minimized and ultimately
eliminated entirely by automation. Yes, it's a complex domain
rich in challenges and there's both art and skill to do it right,
but at the same time, it's also not the thing we want, just the
thing we have to do to get the thing we want, because we can't
yet do better.
HTML [1]: https://news.ycombinator.com/item?id=47107553
dchuk wrote 1 day ago:
If you're able to articulate the issues this clearly, it would
take like an hour to "vibe code" away all of these issues.
That's the actual superpower we all have now. If you know what
good software looks like, you can rough something out so fast,
then iterate and clean it up equally fast, and produce something
great an order of magnitude faster than just a few months ago.
A few times a week I'm finding open source projects that either
have a bunch of old issues and pull requests, or unfinished
todos/roadmaps, and just blasting through all of that and leaving
a PR for the maintainer while I use the fork. All tested, all
clean best practice style code.
Don't complain about the outputs of these tools, use the tools
to produce good outputs.
judahmeek wrote 14 hours 23 min ago:
Care to actually show us any of these PRs?
bigbuppo wrote 18 hours 34 min ago:
How do we learn what a good output actually is?
reval wrote 1 day ago:
The post you're replying to gets this right - lead time is
everything. The faster you can iterate, the more likely that what
you are doing is correct.
I've had a similar experience to what you're describing. We
are slower with AI... for now. Lean into it. Exploit the fact
that you can now iterate much faster. Solve smaller problems.
Solve them completely. Move on.
tombert wrote 1 day ago:
I dunno.
I really hate the expression "the new normal", because it sort of
smuggles in the assumption that there exists such thing as "normal".
It always felt like one of those truisms that people say to exploit
emotions like "in these trying times" or "no one wants to work
anymore".
But I really do think that vibe coding is the "new normal". These
tools are already extremely useful, to a point where I don't really
think we'll be able to go back. These tools are getting good enough
that it's getting to a point where you have to use them. This might
sound like I'm supportive of this, and I guess I am to some extent, but I
find it to be exceedingly disappointing because writing software isn't
fun anymore.
One of my most upvoted comments on HN talks about how I don't enjoy
programming, but instead I enjoy problem solving. This was written
before I was aware of vibe coding stuff, and I think I was wrong. I
guess I actually did enjoy the process of writing the code, instead of
just delegating my work to a virtual intern while I just watch the AI
do the fun stuff.
A very small part of me is kind of hoping that once AI has to be priced
at "not losing money on every call" levels that I'll be forced to
actually think about this stuff again.
syndacks wrote 1 day ago:
I largely agree with you. And, given your points about "not going
back" - how do you propose interviewing SWEs?
tombert wrote 1 day ago:
I have thought about this a lot, and I have no idea. I work for an
"AI-first" company, and we're kind of required to use AI stuff as
often as we can, so I make very liberal use of Codex, but I've been
shielded from the interview process thus far.
I think I would still kind of ask the same questions, though maybe
a bit more conceptual. Like, for example, I might see if I could
get someone to explain how to build something, and then ask them
about data structures that might be useful (e.g. removing a lock by
making an append-only structure). I find that Codex will generally
generate something that "works" but without an understanding of data
structures and algorithms, its implementation will still be
somewhat sub-optimal, meaning that understanding the fundamentals
has value, at least for now.
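The "remove a lock by making an append-only structure" idea mentioned above can be sketched as follows (a CPython-specific illustration I'm adding, not a general lock-free recipe):

```python
# Sketch: a single-writer, many-reader log that needs no reader-side lock
# because entries are only ever appended, never mutated in place.
# NOTE: this leans on CPython's GIL making list.append effectively atomic;
# multiple writers, or other runtimes, still need real synchronization.

class AppendOnlyLog:
    def __init__(self):
        self._entries = []        # grows only; existing slots never change

    def append(self, item):       # called by the single writer
        self._entries.append(item)

    def snapshot(self):           # called by any reader, no lock taken
        n = len(self._entries)    # capture a length first...
        return self._entries[:n]  # ...then copy that stable prefix

log = AppendOnlyLog()
for i in range(3):
    log.append(i)
view = log.snapshot()
log.append(99)                    # later writes don't disturb the snapshot
print(view)                       # [0, 1, 2]
```

The interview-worthy part is the reasoning: readers never observe a half-updated entry because no entry is ever updated, so mutual exclusion has nothing left to protect.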
atomic128 wrote 1 day ago:
Sounds like a16z has some rapidly depreciating software equity they
want to sell you.
Or maybe they own the debt.
Listen to some of the Marc Andreessen interviews promoting
cryptocurrency in 2021.
Do that and you will never listen to him or his associates again.
rsrsrs86 wrote 1 day ago:
They don't make money by being right, they make money by exposing
LPs to risk. Zero commitment to insight. Intellectual production goes
only so far as to attract funding.
bigbuppo wrote 18 hours 24 min ago:
Also... they don't make money by promoting things that are good
ideas that make sense. That's why every lucky billionaire tech bro
that gets into VC ultimately invests in smart toilets. Ultimately,
they just keep putting money into each slot machine they can find
until one of them pays out a jackpot. Eventually one of them will
make up for all the other losses.
alephnerd wrote 1 day ago:
Both AI Fanatics and AI Luddites need to touch grass.
We work in Software ENGINEERING. Engineering is all about what tools
make sense to solve a specific problem. In some cases, AI tools do
show immediate business value (eg. TTS for SDR) and in other cases this
is less obvious.
This is all the more reason why learning about AI/ML fundamentals is
critical in the same way understanding computer architecture, systems
programming, algorithms, and design principles are critical to being a
SWE, because then you can make a data-driven judgment on whether an
approach works or not.
Given the number of throwaway accounts that commented, it clearly
struck a nerve.
rsrsrs86 wrote 1 day ago:
The irony is, AI coding only works after and if you put a lot of work
on engineering, like creating a factory.
alephnerd wrote 1 day ago:
There is a lot of work that goes on before even reaching the point
to write code.
For example, being able to vibecode a UI wireframe instead of being
blocked for 2 sprints by your UI/UX team or templating an alpha to
gauge customer interest in 1 week instead of 1 quarter is a massive
operational improvement.
Of course these aren't completed products, but customers in most
cases can accept such performance in the short-to-medium term or if
it is part of an alpha.
This is why I keep repeating ad nauseum that most decisionmakers
don't expect AI to replace jobs. The reality is, professional
software engineering is about translating business requirements
into tangible products.
It's not the codebase that matters in most cases - it's the
requirements and outcomes that do. Like you can refactor and
prettify your codebase all you want, but if it isn't directly
driving customer revenue or value, then that time could be better
spent elsewhere. It's the usecase that your product enables which
is why they are purchasing your product.
andrekandre wrote 17 hours 41 min ago:
> The reality is, professional software engineering is about
translating business requirements into tangible products.
and most requirements (ime anyways) are usually barely half-baked
and incomplete causing re-testing and re-work over and over which
are the real bottlenecks...
ai/vibe coding may make that cycle faster but idk it might
actually make things worse long-term because now the race course
has rubber walls and there is less penalty just bouncing left and
right instead of smoothly speeding down the course to the next
destination...
alephnerd wrote 16 hours 23 min ago:
> most requirements (ime anyways) are usually barely half-baked
and incomplete causing re-testing and re-work over and over
which are the real bottlenecks...
> ai/vibe coding may make that cycle faster but idk it might
actually make things worse long-term
By making the cycle faster it reduces the impact while also
highlighting issues within the process - there are too many
incompetent PMs and SWEs.
Additionally, in a lot of cases a PM won't tell you that you
might actually be working on checkbox work that someone needs
to do but doesn't justify an entire group of 2-3 SWEs because
then you obviously won't do the work. This kind of work is ripe
for being automated away via vibecoding or agents.
A good reference for this is how close is the feature you are
working on directly aligned with revenue generation - if your
feature cannot be directly monetized as its own SKU or as a
part of a bundle, you are working on a cost center, and cost
centers are what we want to reduce either by automating them
away, offshoring them, or doing a mix of both.
The reality is that perfection is the enemy of good, and this
requires both Engineers and PMs working together to negotiate
on requirements.
If this does not happen at your workplace, you are either
working on a cost center feature that doesn't matter, you are
viewed as a less relevant employee, or you are working at a bad
employer. Either way it is best for your career to leave.
In my experience, if you've actually chatted with executive
leadership teams in most F500s, when they are thinking about
"AI Safety" they are actually thinking about standard
cybersecurity guardrails like zero-trust, identity, authn/z,
and API security with an added layer of SLAs around
deterministic output.
But by being able to constantly iterate and experiment,
companies can release features and products faster with better
margins - getting a V1 out the door in 1 sprint and spending
the rest of the quarter adding guardrails is significantly
cheaper than spending 1 quarter building V2 and then spending 1
more quarter building the same guardrails anyhow.
Basically, we're returning to the same norms in the software
industry that we had pre-COVID around building for pragmatism
instead of for perfection. I saw a severe degradation in the
quality of SWEs during and after COVID (too many code monkeys,
not enough engineers/architects).
rsrsrs86 wrote 22 hours 20 min ago:
As a researcher in formal methods, I totally get you
7777777phil wrote 1 day ago:
Even a16z is walking this back now. I wrote about why the "vibe code
everything" thesis doesn't hold up in two recent pieces:
(1) [1] (2) [2] Acharya's framing is different from mine (he's
talking book on software stocks) but the conclusion is the same: the
"innovation bazooka" pointed at rebuilding payroll is a bad
allocation of resources. Benedict Evans called me out on LinkedIn for
this ( [3] ) take, which I take as a sign the argument is landing.
HTML [1]: https://philippdubach.com/posts/the-saaspocalypse-paradox/
HTML [2]: https://philippdubach.com/posts/the-impossible-backhand/
HTML [3]: https://philippdubach.com/posts/is-ai-really-eating-the-world-...
Derbasti wrote 8 hours 25 min ago:
How is AI code generation an "innovation bazooka"? Last time I
checked, innovation required creativity, context, and insight. Not
really fast boilerplate generators.
rvz wrote 8 hours 27 min ago:
> Even a16z is walking this back now. I wrote about why the "vibe
code everything" thesis doesn't hold up in two recent pieces:
The next one a16z should walk back on is "AGI" given that they have
just admitted that "vibe code everything" was just a sign of them
being consumed by the hype.
ricardobayes wrote 9 hours 22 min ago:
All that is correct and well-written, however I fear in most cases
"good enough" will be good enough for Business. If Business can do
something to 80% the same but with a large cost cutting they likely
go for it, we have seen this with shrinkflation (reduced portion
sizes for the same price), to using cheaper ingredients to
practically everything that is not a knowledge-heavy industry. The
big change is now the "shrinkflation" is coming to knowledge domains
too, which will likely lower the quality of healthcare, software etc.
AI being a next-token predictor will produce cheap and average
products, we will likely see some (most?) software become a
commodity, that goes through the same product development and
"manufacturing" as a breakfast cereal. Made in a "dark factory",
24/7, with little supervision.
However I think down the line we will see many industries popping up
that are like "organic food", "mechanical watchmaking" that provide
above the usual slop that large businesses produce.
noosphr wrote 11 hours 10 min ago:
The best take I've seen on the whole `AI will replace all devs' is a
way for big tech to walk back the disastrous overhiring they did
around Covid without getting slaughtered in the stock market.
somenameforme wrote 7 hours 9 min ago:
I don't understand this take. The market tends to positively value
layoffs.
rwmj wrote 6 hours 25 min ago:
The market doubly rewards companies that lay off workers and have
a story about how they're automating everything with AI, even if
that story is just a story.
mrwh wrote 11 hours 40 min ago:
> investors are simultaneously punishing hyperscaler stocks because
AI capex might generate weak returns, while destroying software
stocks because AI adoption will be so pervasive it renders all
existing software obsolete. Both cannot hold simultaneously.
I don't understand this point. Can't it be possible that the ultimate
effect is to devalue, hugely, software? As in it can totally both be
true that AI capex has weak returns and at the same time most SaaS
companies go bankrupt. To take an analogy: if ever we manage to
successfully mine asteroids, and find some vast quantity of platinum,
it could both be true that every existing platinum miner loses their
shirt, and also that the value of platinum sinks so far that the
asteroid mining company cannot cover its costs.
re-thc wrote 10 hours 15 min ago:
SaaS companies were just overvalued. They had crazy multiples. Not
even an AI thing.
zozbot234 wrote 8 hours 41 min ago:
It is an AI thing though. AI makes it far easier to create
bespoke software targeted at narrow specialized domains, which is
the mainstay of modern SaaS. We'll probably see "proper" FLOSS
expand into these sectors too, such that the software won't be
simply a matter of internal vibecoding by any single business -
instead, the maintenance work will be shared.
re-thc wrote 4 hours 15 min ago:
I don't see AI easily creating a DataDog. You need it for
reliability for example.
You can always also deploy open source since forever. What
happens when it randomly drops logs or changes the text? If you
get an alert and it is noise it starts becoming pointless.
And yet these type of stocks were at 50-100x earnings etc.
rwmj wrote 6 hours 26 min ago:
AI makes it easier to create something, but that thing is not
enterprise software with support contracts and conformance to
mandatory regulations and 4 hour bug turnarounds and real
people on the end of the phone who understand how it works.
Sometimes I just wonder at how HN has no idea what enterprise
software involves.
zozbot234 wrote 5 hours 42 min ago:
With this kind of niche sector-specific offering, creating a
prototype that works properly for what the industry needs is
the main hurdle. The rest is just the same sort of ordinary
software engineering work that applies to any FLOSS project
already - and we know that FLOSS (with optional 3rd party
support covering "enterprise" needs) is quite viable.
selridge wrote 1 day ago:
> Benedict Evans called me out on LinkedIn for this take, which I
take as a sign the argument is landing.
Excellent. And correct lol.
crsv wrote 15 hours 34 min ago:
The fact that this is getting downvoted gave me a hearty chuckle.
Never change, HN.
duzer65657 wrote 1 day ago:
>> Anish Acharya says it is not worth it to use AI-assisted coding for
all business functions. AI should focus on core business development,
not rebuilding enterprise software.
I don't even know what this means, but my take: we should stop
listening to VCs (especially those like A16Z) who have an obvious
vested interest that doesn't match the rest of society. Granting these
people an audience is totally unwarranted; nobody but other tech bros
said "we will vibe code everything" in the first place. Best case
scenario: they all go to the same exclusive conference, get the branded
conference technical vest, and that's where the asteroid hits.
sanction8 wrote 1 day ago:
a16z talking again?
This is your regular reminder that
1) a16z is one of the largest backers of LLMs
2) They named one of the two authors of the Fascist Manifesto their
patron saint
3) AI systems are built to function in ways that degrade and are likely
to destroy our crucial civic institutions (quoted from
Professor Woodrow Hartzog, "How AI Destroys Institutions"). Or to put it
another way, being plausible but slightly wrong and un-auditable, at
scale, is the killer feature of LLMs, and this combination of
properties makes it an essentially fascist technology, meaning it is
well suited to centralizing authority, eliminating checks on that
authority and advancing an anti-science agenda (quoted from the "A
plausible, scalable and slightly wrong black box: why large language
models are a fascist technology that cannot be redeemed" post).
nylonstrung wrote 1 day ago:
This wasn't a16z monolithically speaking as a firm, it was Anish
Acharya talking on a podcast.
Seems like he's focused on fintech and not involved in many of their
LLM investments
arjie wrote 1 day ago:
I will not claim to be an expert historian but one general belief I
have is that nomenclature undergoes semantic migration over a
century. So for the sake of conciseness I will quote the first demand
of each portion of the Fascist Manifesto. This isn't to obscure,
because it is in Wikipedia[0] and translated in English on EN
Wikipedia[1], but so I can share a sample of whether this is
something we can relate to our present day political orientation.
Hopefully it will inform what you believe "author of the Fascist
Manifesto" to imply:
> ...
> For this WE WANT:
> On the political problem:
> Universal suffrage by regional list voting, with proportional
representation, voting and eligibility for women.
> ...
> On the social problem:
> WE WANT:
> The prompt enactment of a state law enshrining the legal eight-hour
workday for all jobs.
> ...
> On the military issue:
> WE WANT:
> The establishment of a national militia with brief educational
services and exclusively defensive duty.
> ...
> On the financial problem:
> WE WANT:
> A strong extraordinary tax on capital of a progressive nature,
having the form of true PARTIAL EXPROPRIATION of all wealth.
> ...
0: [1] 1:
HTML [1]: https://it.wikipedia.org/wiki/Programma_di_San_Sepolcro#Test...
HTML [2]: https://en.wikipedia.org/wiki/Fascist_Manifesto#Text
decidu0us9034 wrote 15 hours 55 min ago:
Sure. They're making a strong claim, but I think they mean "author
of the Fascist Manifesto" as shorthand to say Marinetti was an
ardent supporter of fascism and Mussolini. His support continued
throughout the '30s and '40s, even after the Pact of Steel and the
Racial Laws etc., even volunteering to go to the Eastern Front. I
think we can say with the benefit of hindsight that the fascists'
attempts to ingratiate themselves with the workers' movement were
sort of ancillary to the whole political/ideological project... I
mean, I'd hope any student of history agrees with that...
holden_nelson wrote 1 day ago:
I'm not particularly political and am also not a historian, but I
don't think it's necessarily correct to equate the literal text
of the manifesto with the principles and practices of fascism.
The message of universal suffrage vs. that of preventing an out
group from "stealing" an election are not far apart
semantically. Same with workers' rights: in practice, the worker
protection laws passed in Italy at this time were so full
of loopholes and qualifications that ultimately the workers did not
gain power in that system.
It is thus fair, in my view, to question the spirit of the
manifesto in the first place.
arjie wrote 1 day ago:
I suppose then, to be intellectually consistent, we should take the
position that an eight-hour workday and a wealth tax are
fascist principles.
DIR <- back to front page