o_nate 2 days ago

There's an old game in the investing world of trying to time the top of a stock bubble by picking out the most breathless headlines and magazine covers, looking for statements such as the famous 1929 quote from two weeks before the market crash: "Stock prices have reached what looks like a permanently high plateau." By that metric, we may be getting close to the top of the AI hype bubble, with headlines such as the one I saw recently in the NY Times for an Ezra Klein column: "The Government Knows A.G.I. Is Coming".

  • cyberlurker 2 days ago

    Listening to his podcast on the topic was so disappointing. I think Ezra is a smart guy, but he doesn’t understand the field and the entire premise of the long discussion was that LLMs are going to get us to AGI.

    • mzronek 2 days ago

      He also casually dropped that he talked to people at firms with "high amount of coding", who told him that by the end of this or next year "most code will not be written by humans".

      Yeah, okay. I work with Copilot etc. every day and the practical value is there, but there are so many steps missing for this statement to become true that I highly doubt it.

      My point is, wouldn't we already be seeing tools that are at least getting close to this goal? I can't believe that (or AGI, for that matter) will be a big-bang release. It looks more like baby steps for now.

      • thfuran 2 days ago

        I mostly write Java, and I rarely even look at the bytecode, let alone the output of C2. I guess AGI took my job.

      • stemlord a day ago

        >most code will not be written by humans

        That can't even be considered a useful metric, seeing as I spend similar time reviewing GPT-written code.

    • edanm a day ago

      > but he doesn’t understand the field and the entire premise of the long discussion was that LLMs are going to get us to AGI.

      You might disagree with this assessment, but it doesn't show that he doesn't understand the field, since many of the people in the field also think this. At least to the definition of AGI he used - AI that can replace most of the economic work done by humans.

      • filoleg 15 hours ago

        > it doesn't show that he doesn't understand the field, since many of the people in the field also think this

        Have you ever interviewed people for software positions?

        After being part of the interview loop myself for a while, I am confident that “many of the people in the field” also don’t understand it. I'm not even talking about the leetcode gauntlet or complex systems design, just the pure fundamentals and the very basics, let alone the larger-scale picture of the business.

        With that in mind, the fact that “many of the people in the field” he interviewed at “places with high amount of coding” agreed with him doesn’t say much.

        • edanm 14 hours ago

          I've been interviewing people for fifteen years, yes.

          And while it's true that many people don't understand much, I don't think this applies to many people working at frontier model or AI companies. Especially not the high level people Ezra said he talked with.

          • filoleg 13 hours ago

            Fair. I am not sure people working at AI companies are a good measure for this, as they have a personal stake in their claims.

            Maybe I am just morally corrupt enough to entertain this idea, but people working at AI companies exaggerating and overhyping the impact of their work sounds like an obvious move.

            • edanm 12 hours ago

              I doubt it's that. But he also talked to people outside of these AI companies. The person he interviewed on the podcast was in the Biden administration.

    • DebtDeflation 2 days ago

      I stop reading/listening as soon as AGI or Superintelligence is mentioned.

  • gh0stcat 2 days ago

    The AGI piece from Ezra was frustrating, to the extent that after listening to him talk about technology in this podcast, it made me question the quality of his knowledge in domains I know far less about.

  • kurthr 2 days ago

    [flagged]

    • falcor84 2 days ago

      For what it's worth, intelligent sexual robots will likely be massive in the next decade

    • layer8 2 days ago

      Artificial Glitchy Intelligence

breckenedge 2 days ago

This article is way too light on details. Does it conflate Nvidia’s stock price with interest in generative AI? New use cases are arriving every month. Nine months ago I was amazed to use Cursor, and was leading the effort to get my team to switch to it. Three months ago it was Cursor adding agents, and again demonstrating their benefits to my colleagues. Now I’m using Cline + Claude 3.7 and am more productive than I’ve ever been — and I haven’t even touched MCPs yet.

Definitely not peaked yet, IMO. However, yeah, I don’t see it fully replacing developers in the next 1-2 years — it still gets caught in loops way too often and makes silly mistakes.

  • bwestergard 2 days ago

    Thanks for your comment.

    I am arguing the hype has peaked, and that there will likely be a pullback in investment in the next year. This is not to say the technology has "peaked", which I'm not sure one could even define precisely.

    Important technologies emerged during each past "AI summer", and did not disappear during "AI winter". LISP is more popular than ever, despite the collapse of hype in symbolic reasoning AI decades ago.

    As I mention in the OP, I think productivity enhancing tools for developers are one of the LLM applications that is here to stay. If I didn't think so, I wouldn't be concerned about the impact on skill development among developers.

    https://en.wikipedia.org/wiki/AI_winter

    • alexpotato 2 days ago

      Counter point:

      The decline in NVDA stock price may also be due to newer models that require fewer GPUs, specifically from NVDA.

      In other words, the demand may stay the same but if fewer GPUs in general or non-NVDA GPUs specifically get you to the same point performance-wise then the supply just went up.

      • dowager_dan99 2 days ago

        this seems realistic considering a lot of previous tech advancements. We focus on the disruption, but the efficiency improvements follow closely (and often more easily) on its coat-tails. We should definitely be looking for more efficient production of what's already been proven, versus the next big step happening immediately.

    • Yoric 2 days ago

      If my memory serves, both SQL and HTML are indirect fallout from an AI summer, too.

  • Etheryte 2 days ago

    I would say the hype has started to fall off, as it's becoming increasingly obvious that AGI is not around the corner, but meanwhile practical use cases keep getting better and better. The more we understand the strengths and weaknesses, the better we can exploit them, and even if the models themselves have hit the scaling wall, I think tooling around them is far from done.

  • maxglute 2 days ago

    Peaking hype = investors think generative AI may replace billions instead of trillions of dollars of economic activity in the return window they're looking at.

    • breckenedge 2 days ago

      I believe it continues to get faster and better from here. We haven’t even scratched the surface of deployed capabilities yet. Sure it might not be a path to AGI, but it could still replace many people in many roles. It may not be a specific company’s silicon that wins, but generative AI is just getting started. Yes, Claude 3.7 and ChatGPT 4.5 are not as groundbreaking as previous iterations, but there are so many untouched areas ripe for investment.

  • MangoCoffee 2 days ago

    AI is just a tool. It's not going to replace human coders anytime soon.

    • beernet 16 hours ago

      >> It's not going to replace human coders anytime soon.

      Not sure if this arrogant POV is very sustainable. This technology is replacing human coders as we speak, right now. Just not all of them yet, of course.

    • giantrobot 2 days ago

      That's not what the C-suite is telling their boards/investors as they conduct layoffs to goose margins at the end of the quarter. So a lot of people are having their lives and livelihoods upended because of unrestrained hype.

  • yubblegum 2 days ago

    > Nvidia’s stock price

    Market may be pricing in possible takeover of Taiwan by China.

daedrdev 2 days ago

Stocks are down because the president of the US has entered a costly trade war, actually.

  • tenpies 2 days ago

    What do you make of something like Reddit (RDDT, down 15% at this moment)?

    It's unaffected by tariffs, but its insane valuation is driven by the narrative that Reddit posts can be used to train AI. Without that narrative, you have a semi-toxic collection of forums whose valuation would probably be somewhere in the millions at best, not the current $20B.

    • loandbehold 2 days ago

      Reddit is an ad-driven business. Ad revenues decline when economy shrinks.

    • daedrdev a day ago

      Loss-making businesses can be expected to fail more often when a recession occurs, which looks increasingly likely. After all, if they can't make any money today, how will they make a profit if ad spend is down by 30%?

    • greener_grass 2 days ago

      The companies that might acquire Reddit are affected by tariffs.

    • aetherson 2 days ago

      I mean, not to say that you might not have some explanatory power here, but the market is complex and difficult to untangle, and at least some analysts are predicting recession which will certainly have effects on Reddit even if it's not directly affected by tariffs. We can all cherry-pick individual stocks.

    • mixmastamyk 2 days ago

      When a correction happens, everyone with short-term funds pulls them out. Doesn't matter if the issue has a direct connection to the stock or "makes sense" at all.

  • bwestergard 2 days ago

    No disagreement from me there. But for the year to date the Nasdaq composite is down less than 4%, whereas NVIDIA is down 20%.

    • YetAnotherNick 2 days ago

      Nvidia is up 24% over the last year, compared to <10% for the Nasdaq or S&P. Cherry-picking the point to compare from is bad.

      • enragedcacti 2 days ago

        It's not cherry-picking to use recent data to dispute a claim about recent events. YTD might not be the best choice, but it's better than 1Y. 1M or Feb 20th to now gives similar, though not quite as extreme, differences (down 7-8% SPY vs down 20-23% NVDA).

        • YetAnotherNick 2 days ago

          Feb 3 to now is -8% for Nvidia and -10% for the Nasdaq. Arbitrary points will give arbitrary results. Also, Amazon, which is the big tech company least associated with gen AI, is down 20% over that period.

          • enragedcacti a day ago

            They aren't arbitrary. I picked the dates I did because they coincide with the widespread narrative of selloffs from tariffs and they are decent bounds on a period of stability for both.

            Amazon is especially vulnerable to tariffs. To be clear, I'm not really staking out a hard position on gen AI being the one true cause here; I just don't find "it's just part of the overall tariff-driven market decline" to be very convincing.

      • mlinhares 2 days ago

        I'm definitely betting the AI bubble is going to burst but NVIDIA isn't the company that will go down with it, they actually have a real business that is not just AI hype behind it. The insane valuation they have now might not hold but I doubt they are at any risk of disappearing.

      • danielcampos93 2 days ago

        they also 10xed their revenue. 24% seems low for a company that pulled that off.

    • dwedge 2 days ago

      Because of DeepSeek, right? Saying it's the end of AI because of another AI is difficult to swallow.

  • SubiculumCode 2 days ago

    A highly unpredictable trade war, on top of rattling every international ally, convincing nations across the world to choose military platforms other than ours because Trump could just turn them off at a whim, and increasing the risk of political instability that leads nations to look elsewhere for investment. Our economy is a ticking time bomb under Trump.

    • deadbabe 2 days ago

      The stock market is not the economy. International allies' purchasing decisions are not the economy.

      • daedrdev a day ago

        Indeed, it's the terrible outlook on the economy that is leading the stock market down. The US is barreling towards recession; the continued flip-flopping on tariffs has caused business investment and projects to be frozen while everyone waits for certainty, since no one can risk getting nuked by tariffs; and the rapid, deep decline in trust of US-made goods will ruin our exports and military base.

      • cyberlurker 2 days ago

        Which part of the economy are you taking inspiration from?

hnthrow90348765 2 days ago

>We may look back in a decade and lament how self-serving and short-sighted employers stopped hiring less experienced workers, denied them the opportunity to learn by doing, and thereby limited the future supply of experienced developers.

I think bootcamps will bloom again and companies will hire people from there. The bootcamp pipeline is way faster than 4 year degrees and easy to spin up if the industry decides the dev pipeline needs more juniors. Most businesses don't need CompSci degrees for the implementation work because it's mostly CRUD apps, so the degree is often a signal of intellect.

This model has a few advantages for employers (provided the bootcamps aren't being predatory), like ISAs and referrals. Bootcamp reputations probably need some work though.

What I think will go away is the bootstraps idea that you can self-teach and do projects by yourself and cold-apply to junior positions and expect an interview on merit alone. You'll need to network to get an 'in' at a company, but that can be slow. Or do visible open source work which is also slow.

  • ike2792 2 days ago

    The problem with bootcamps right now is that they provide no predictive value. If I hire someone with a CS degree from, say, Stanford that has 2-3 internships and a few semester-long projects under their belt, that gives me reasonable confidence as a manager that the person has what it takes to solve problems with software and communicate well. Bootcamp candidate resumes are all basically identical and the projects are so heavily coached and curated that it is difficult to figure out how much the candidate actually knows.

    • hnthrow90348765 2 days ago

      In this hypothetical scenario from the article, it's been years since employers stopped hiring juniors, so depending on when they graduate, there's probably employment gaps or unrelated work to factor into your decision as well.

      And after this period, when companies start hiring juniors again, the amount of Stanford-like graduates may still be small because few wanted to go into CS. You have like a 2-4 year wait for people deciding to go into CS again.

      If you are FAANG, you can throw money at the problem to get the best, but ordinary businesses probably won't be able to get Stanford grads during a junior-hiring boom.

    • aggie 2 days ago

      Most people do not have elite resumes and most people are not hiring people with elite resumes. There's plenty of uncertainty in hiring in general, and that being the case with bootcamps isn't much different than a typical resume with a 4-year degree.

      • shortstuffsushi 2 days ago

        As someone who hires in the Midwest, I agree with the position that most people are not coming from "elite" schools. I still much prefer someone from a four-year school (and, as the other poster mentioned, internships) to a bootcamp. Of five bootcamp graduates I've had, one was at a useful starting skill level, compared to probably 90% (don't have a count for this one) at "base useful" skill out of college.

      • ike2792 2 days ago

        I used Stanford as an example, but plenty of companies focus on CS grads from big state schools like Purdue, Michigan, Ohio State, etc that have similar resumes. In my experience, graduates from 4-year CS programs with some internship experience vastly outperform bootcamp grads as a group. I have hired and worked with some outstanding bootcamp grads, but you would never know that they stood out before actually interviewing them since most bootcamps have standard resume templates they tell their grads to use. In an era of 200+ applicants/day for every junior engineering role, you need to be able to tell that someone probably has what it takes to succeed after a 30 second resume scan.

      • seanhunter a day ago

        As a hiring manager, I can say that the quality of candidates from any sort of degree program versus any sort of bootcamp is really chalk and cheese. It's not a question of only hiring people with elite resumes; it's the difference between someone who has had to think deeply about a subject and learn how to solve hard problems, and someone who has had rote knowledge of a specific tech stack drilled into them by repetition and who really struggles the minute they get outside their (narrow) zone.

        That's not to say I never hire bootcamp candidates, it is that going to a bootcamp is not really a positive in my assessment.

      • vunderba 2 days ago

        You would be wrong. The difference between a code camper (3-6 months) and a bachelor's in compsci is the difference between a paramedic and a doctor. If all you need is a CRUD dev who can use a JS framework and a CSS library, then it might be sufficient, but the rigorous fundamentals underpinning a compsci degree (discrete math, data structures, algorithms, etc.) make for a far more knowledgeable and solid engineer.

  • vunderba 2 days ago

    Perhaps, but I have my doubts. With new jobs receiving as many as hundreds of applicants on a single posting, a university degree can significantly help in whittling down the list.

    Potential employers can (and do) verify your educational background, such as a degree from an accredited university. Even if you had a certificate from a "legitimate code camp" (though I'd argue those are about as valuable as an online TEFL cert), they have no way to verify it.

jsight 2 days ago

If the average person has still not ridden in a self-driving car, assembled by Figure 02-style robots, through a drive-thru with AI ordering, then we aren't even close to seeing the real peak here.

>100x growth ahead for sure.

  • hylaride 2 days ago

    Most people (in the world) hadn't been on the internet in 2000 when the dot-com crash happened. Barely half the US population was even online at that point. We're probably nowhere near the peak of AI ability or usage, but that doesn't mean there hasn't been a lot of mal-investment or that things can't commodify.

    Huge amounts of internet growth still happened after the 2000 crash, but networking gear and fiber optic networking became a commodity play, meaning the ROI shifted. The companies that survived ended up piggybacking on the over-investment on the cheap, including Amazon and Google.

    Even going way back, the real productive growth of the American railroads didn't happen until after the panic of 1873 after overbuilding was rationalized.

    • jsight 2 days ago

      Agreed, and good reminder that a lot of people here have probably only learned about the dot-com crash from comments and history. I remember when Cisco had a P/E in the hundreds during that era. People have forgotten just how stratospheric some of those valuations were back then.

      I hate to say "this time is different", but it really doesn't feel the same way, at least in public equities. Nvidia has a high stock price, but they also have a P/E of ~36. Meanwhile, modern Cisco is ~27.

      There might be some parallels though. OpenAI as modern Netscape might not be that far off.

      • rchaud 16 hours ago

        Cisco makes network equipment (selling shovels in a gold rush), so a high valuation at the time wasn't out of the ordinary, considering Cisco was actually profitable during the dotcom boom, when many newly public companies were 'pre-revenue'. I think of it as similar to Nvidia, who also sells shovels.

      • freejazz 17 hours ago

        Yeah... the public equities.

  • rco8786 2 days ago

    The author isn't claiming AI has peaked, only that the hype has peaked.

    • jsight 2 days ago

      I came close to adding a paragraph about that. Inevitably someone would argue that there's a difference between hype peaking and 100x future growth.

      Regardless of whether that distinction is useful, the author makes some fairly specific claims about inelasticity of demand, and seems only to lack confidence regarding the timing of Nvidia's fall, not its inevitability.

      I disagree with all of that.

  • vessenes 2 days ago

    What’s crazy is we are very close to this right now, especially if you count industrial robots: BYD's production is almost totally autonomous, and I believe Tesla is close as well.

    • jsight a day ago

      Indeed. TBH, I also wasn't thinking about Boston Dynamics. Their Atlas robot looks really impressive. It is easy to imagine an almost 100% automated factory in the not-too-distant future.

  • khrbrt 2 days ago

    None of those cases are "generative" AI.

    • jsight 2 days ago

      We can debate the definition of "generative", but it doesn't seem important. A key claim was that Nvidia's stock price decline is inevitable, with the only question being timing. Meanwhile all of these other use cases will drive demand anyway.

      But honestly, even chat apps are nowhere near their peak. Hallucinations and fine tuning issues are holding that segment back. There's a lot of growth potential there too as confidence and training help to increase adoption.

qoez 2 days ago

One thing I'd love to short is the idea that we're going to have a second AI winter. Lots of people predict it, but I believe this time there's actually a real step-function innovation (and last time the winter was caused by AI being a very distant research project whose funding dried up amid competition with the much more lucrative internet, which was growing at the same time).

cenobyte 2 days ago

Anyone who thinks the hype has peaked is obviously too young to remember the dotcom bubble.

It will get so much worse before it starts to fade.

Infecting every commercial, movie plot, and article that you read.

I can still hear the Yahoo yodel in my head from radio and TV commercials.

siliconc0w 2 days ago

IMO Grok and 4.5 show that we've reached the end of reasonable pre-training scaling. We'll see how far we can get with RL in post-training, but I suspect we're pretty close to maxed out there and will start seeing diminishing returns. The rest is just inference efficiency, porting the gains to smaller models, and building the right app-layer infrastructure to take advantage of the technology.

I do think we're overbuilding on Nvidia and the CUDA moat isn't as big as people think, inference workloads will dominate, and purpose-built inference accelerators will be preferred in the next hardware-cycle.

ypeterholmes 2 days ago

So Deep Research and the latest reasoning models don't deserve mention here? I wish there was accountability on the internet, so that people posting stuff like this can be held accountable a year from now.

skepticATX 2 days ago

The industry only has themselves to blame. When you promise literal utopia and inevitably don’t deliver, you can’t be surprised by what happens next.

_cs2017_ 2 days ago

Skeptical as I am about generative AI, the quality of this particular article (in terms of evidence provided, logic, insights, etc.) is substantially lower than what ChatGPT / Gemini DeepResearch can generate. If I were grading, I'd rate an average (unedited) AI DeepResearch report at 3/10, and the headline article at 1/10.

zekenie 2 days ago

Idk I used Claude Code recently and revised all my estimates. Even if the models stop getting better today I think every product has years of runway before they incorporate these things effectively.

gdubs 2 days ago

Something I've been saying for two years now is that AI is the most over-hyped and the most under-hyped technology, simultaneously.

On the one hand, it has been two years of "x is cooked because this week y came out...", and on the other hand, there are people who seem to have formed their opinions based on ChatGPT 3.5 and have never checked in again on the state-of-the-art LLMs.

In the same time period, social media has done its thing of splitting people into camps on the matter. So, people – broadly speaking, no not you wise HN reader – are either in the "AI is theft and slop" camp or the "AI will bring about infinite prosperity" camp.

Reality is way more nuanced, as usual. There are incredible things you can do today with AI that would have seemed impossible twenty years ago. I can quickly make some python script that solves a real-world problem for me, by giving fuzzy instructions to a computer. I can bounce ideas off of an LLM and, even if it's not always 'correct', it's still a valuable rubber-ducky.

If you look at the pace of development – compare MidJourney images from a few years ago to the relatively stable generative video clips being created today – it's really hard to say with a straight face that things aren't progressing at a dizzying rate.

I can kind of stand in between these two extreme points of view, and paradigm-shift myself into them for a moment. It's not surprising that creative people who have been promised a wonderful world from technology are skeptical – lots of broken promises and regressions from big tech over the past couple of decades. Also unclear why suddenly society would become redistributive when nobody has to work anymore, when the trend has been a concentration of wealth in the hands of the people who own the algorithms.

On the other hand, there is a lot of drudgery in modern society. There's a lot of evolution in our brains that's biased toward roaming around picking berries and playing music and dancing with our little bands. Sitting in traffic to go sit in a small phone booth and review spreadsheets is something a lot of people would happily outsource to an AI.

The bottom line – if there is one – is that uncertainty and risk are also huge opportunities. But, it's really hard for anyone to say where all of this is actually headed.

I come back to the simultaneity of over-hyped/under-hyped.

  • DanHulton 2 days ago

    I guess my biggest worry is that the ones doing the outsourcing of all this "drudgery" are unlikely to be the workers who are currently being paid to do the work, but the owners who no longer have to pay them.

    The rest of society and our economy doesn't seem to be adjusting to hundreds of thousands or millions of people being "outsourced", so it's not likely there will be a lot of playing music and dancing for these people, though you may be more prescient than either of us are comfortable with, with the "berry picking" prediction...

    • gdubs 2 days ago

      Right - this is a big point of what I'm saying which is that people are skeptical that "this time the automation will work out really well for everyone, trust us!"

      Depends on your timescale, I suppose. But if AI is really about to take over everyone's work then we need to be having a much bigger discussion about what they imagine the billions of people on this planet will be doing in that scenario, and what kind of economic life they'll be living.

OldGreenYodaGPT 2 days ago

Peaked? Nah, it's barely started. Wait till we get decent SWE agents reliably writing good code, probably later this year or next. Once AI moves beyond simple boilerplate, the productivity boost will be huge. Too soon to call hype when we've barely scratched the surface.

  • bluefirebrand 2 days ago

    I asked Copilot to write me a TypeScript function today.

    I had two defined types with the exact same field names. The only difference was that one had field names written in snake_case, and the other had names written in camelCase. Otherwise they were exactly the same.

    I wanted a function that would take an object of the snake_case type and output an object of the camelCase type. The object only had about 10 fields.

    It missed about half of the fields, and inserted fields that didn't even exist on either object.

    You cannot convince me that AI is anywhere near this level if it cannot even generate a function that converts "is_enabled" to "isEnabled" inside an object.
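
    For reference, here's roughly the whole thing written by hand, as a sketch (the field names here are made up, but the shape is the same):

      // Two types with identical fields, differing only in naming convention.
      type SnakeCaseRow = {
        is_enabled: boolean;
        created_at: string;
        user_name: string;
      };

      type CamelCaseRow = {
        isEnabled: boolean;
        createdAt: string;
        userName: string;
      };

      // Map each snake_case field to its camelCase counterpart,
      // e.g. "is_enabled" -> "isEnabled".
      function toCamelCaseRow(row: SnakeCaseRow): CamelCaseRow {
        return {
          isEnabled: row.is_enabled,
          createdAt: row.created_at,
          userName: row.user_name,
        };
      }

    It's pure mechanical mapping, which is exactly why getting half of it wrong is so damning.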

    Every time I try this stuff I'm so disappointed with it. It makes me think anyone who is hyped about it is an absolute fraud who does not know at all what they are doing.

    • parliament32 a day ago

      This mirrors my experience, and I've tried nearly every popular model available, numerous times over the last year or so. I'm having trouble understanding where all the hype is coming from -- is it literally just all marketing? I refuse to believe any half-decent SWE can be bullish on AI-for-code after novelty of the first week wears off.

    • leosanchez 2 days ago

      TypeScript is probably the language it has the most data on...

    • knowaveragejoe 2 days ago

      You get out what you put in. Of course, if you provide one sentence of context (and some implicit context from the environment), you aren't going to get a magical result.

      • bluefirebrand 2 days ago

        The test was "can I get it to generate this while spending less effort than it would take me to just write it", and it failed miserably. And this was a super-low-effort, small boilerplate problem to solve. This is the sort of problem it has to solve to be remotely useful.

        If it cannot do that, then why is anyone saying it is a productivity booster?

        • knowaveragejoe a day ago

          My response could only possibly be that I haven't had that issue. I've asked for relatively complex changes to codebases (mainly Python), and had very little in the way of trouble.

          • imiric a day ago

            ... That you're aware of.

            The more code you ask it to generate, the higher the chances that it will introduce an issue. Even if the code compiles, subtle bugs can easily creep in. Not unlike a human programmer, you might say, but a human programmer wouldn't hallucinate APIs. LLMs make entirely unique types of errors, and do so confidently.

            You really need to carefully review every single line, which sometimes takes more effort than just writing it yourself. I would be particularly wary of generated code for a dynamic language like Python.

rvz 2 days ago

This is the year 1999 again. You have companies that are valued at tens of billions with no product AND no revenue.

There is also a race to zero, where the best AI models are getting cheaper and big tech is attempting to kill your startup (again) by lowering prices until it's free, for as long as they want.

More of the startups YC accepts are so-called AI startups, which are just vehicles for OpenAI to copy the best ones and for the other 90% to die.

This is an obvious bubble waiting to burst, with Big Tech coming out stronger, AI frontier companies becoming a new elite group ("Big AI"), and the other so-called AI startups getting wiped out.

zzzeek 2 days ago

Sorry, did you not notice the advertisement for "AI Startup School" at the bottom of Hacker News ? Ixnay on the egativity-nay, my friend !

  • xyst 2 days ago

    > … AI Startup School will gather 2000 of the top CS undergrads, masters, and PhD candidates in AI to hear from speakers including Elon Musk

    What a blunder by YC. What is this tool going to add to the conversation?

    Hope he gets removed from speaker list.

edanm a day ago

This article is only an opinion piece with no real evidence to back it up. I disagree with most of it. I'd argue against specifics, but there are no real specifics in the article, so I'm not sure I can do any better than say "no, I think you're wrong".

I also think it does the common-but-wrong thing of conflating investment in big AI companies with how useful GenAI is and will be. It's entirely possible for the investments in OpenAI to end up worthless, and for it to collapse completely, while GenAI still ends up as big as most people claim.

Lastly, I think this article severely downplays how useful LLMs are now.

> In my occupation of software development, querying ChatGPT and DeepSeek has largely replaced searching sites like StackOverflow. These chatbots generally save time with prompts like "write a TypeScript type declaration for objects that look like this", "convert this function from Python to Javascript", "provide me with a list of the scientific names of bird species mentioned directly or indirectly in this essay".

I mean, yes, they do that... but there are tools today that are starting to be able to look at a real codebase, get a prompt like "fix this bug" or "implement this feature", and actually do it. None of them are perfect yet, and they're all still limited... but I think you have to have zero imagination to think that they are going to stop exactly here.

I think even with no fundamental advances in the underlying tech, it is entirely possible we will be replacing most programming with prompting. I don't think that will make software devs obsolete, it might be the opposite - but "LLMs are a slightly better StackOverflow" is a huge understatement.

codingwagie 2 days ago

People are just click-farming with these posts. The technology is ~4 years old. We are in the infancy of this, with hundreds of billions in capital behind making these systems work. It's one of the biggest innovations of the last 100 years.

I propose an internet ban for anyone calling the generative ai top, and a public tar and feathering

  • dwedge 2 days ago

    If you had said the last 20 years then maybe, but the last 100 years? For a really good autocomplete?

    Just think about the innovations over the last 100 years, how the world looked in 1925

    • dale_glass 2 days ago

      It's incredibly good autocomplete, though!

      I remember the 90s, and the time when translation programs were truly awful. They had pretty much no memory, not knowing what "it" or "she" might refer to from the previous sentence. They tended to pick a translation based on the selected dictionary, and choked on typos and trademarks. A popular pastime in my neck of the woods was feeding a translator the manual for a computer mouse, selecting the medical dictionary, and giggling at all the incoherent talk about mouse testicles and their removal and cleaning procedures.

      In light of that, ChatGPT is sci-fi magic. It translates very well, it deals with informal speech, it can deal with typos, and you can even feed it a random meme JPG and it'll tell you what it's about. It's really like living in the future.

    • cyberlurker 2 days ago

      It’s absolutely amazing what these things can produce with simple prompts. I do think it deserves recognition as a top 5 innovation of the last 100 years. We can call it really good autocomplete but it really is incredible.

      That doesn’t mean I agree all jobs are going away, AGI is here, or all these other guesses people are making about the future.

    • wordpad a day ago

      I am with OP. Being able to have full, meaningful conversations with a machine about any topic in existence is up there with the invention of flight.

    • codingwagie 2 days ago

      Well, human beings are just autocomplete too, if that's all LLMs are.

      • __loam 2 days ago

        Yes, the classic retort.

    • fragmede 2 days ago

      If all you've used it for is autocompleting code, you're doing yourself a disservice. In 1925, there were already cars and airplanes and telephones, even if they were still very new. There have been great leaps and advances in medicine and genetic engineering, like CRISPR and mRNA vaccines. Computers and the Internet are the two really huge things from the past 100 years. The industrial revolution predates 1925: where a man could make one widget before, with a machine he could make a thousand of them. So there's a multiplicative effect from that machine; where you needed 10 humans, you could now use one. AI is still developing, but the promise of it is even greater because we can, in the hoped-for future, copy-paste the human who does the work. It's a ludicrous idea, and the societal impact would be immense, and we're not there yet, but given that there were already telephones in 1925, an optimist's view of what we're on the cusp of really is revolutionary.

      That, of course, is the optimist's view. If you cynically see LLMs as a dead end that won't ever get that far even with Moore's law, then any day now we're going to come to our senses and give up on the whole thing, but looking at how we've come to our senses about crypto and Bitcoin is now worth $0, all I can say is that I'm along for the ride.

  • parliament32 a day ago

    >We are in the infancy of this, with hundreds of billions of capital behind making these systems work

    Just like IoT, just like web3, just like blockchain, just like...

  • munchler 2 days ago

    Agreed. To me, this is reminiscent of the "dot-com bubble" 25 years ago. The internet changed the world permanently, even if the stock market got ahead of itself for a few years. The same is true of generative AI.

    https://en.wikipedia.org/wiki/Dot-com_bubble

    • dwedge 2 days ago

      Damn I feel old that you had to link the dotcom bubble

      • deadbabe 2 days ago

        The funny thing is, when the first dotcom bubble was going on, people would have been making references back to the first AI winter of 1974-1980, when funding for AI-related projects dropped precipitously as interest waned.

  • __loam 2 days ago

    People have been saying it's still early in crypto for over a decade.

    This much capital being poured into something and having very little to show for it is actually a bad sign, not a positive.

    Putting it on the same shelf as the transistor, the jet engine, and the nuclear bomb is pretty funny. It's a probabilistic token generator. Relax.

    • codingwagie 2 days ago

      this is nothing like crypto

      • __loam 2 days ago

        Besides the reliance on GPU compute, the insistence of its proponents that it is inevitable, and the plowing of resources into the space by venture capital, sure, it's nothing like crypto.

adpirz 2 days ago

Having used the latest models regularly, it does feel like we're at diminishing returns in terms of raw performance from GenAI / LLMs.

...but now it'll be exciting to let them bake. We need some time to really explore what we can do with them. We're still mostly operating in back-and-forth chats; I think there's going to be lots of experimentation with different modalities of interaction here.

It's like we've just gotten past the `Pets.com` era of GenAI and are getting ready to transition to the app era.

  • imiric a day ago

    I would really like to see the current generation of this tech make it into video games. Not the Sora type where the entire game is an interactive video (though that would be interesting to explore as well), but in more subtle and imaginative ways.

    Perhaps as an extension of procedural generation, in interesting mechanics such as [1], or eventually even fully interactive NPCs.

    PCs are starting to become more capable of running models locally, which can only make this tech more accessible. Like you say, we've barely begun exploring the possible use cases.

    [1]: https://infinite-craft.gg/

mordae 2 days ago

> has slackened modestly compared to late-2019 due to higher interest rates, the job market for less experienced developers seems positively dire.

Maybe in the US.

th0ma5 2 days ago

Is saying that you're critical of AI the new approach to being uncritical of it?

  • debacle 2 days ago

    I think at this point in the wave, the criticism starts to pop up here and there, but it's still decried. In 12-18 months, the momentum of the white-hot VC injections over the last few years will sustain the wave for a time. By '27 or '28, the unicorn payoffs in the space will arise, and by '30 "everyone" will know that AI has been overhyped for a while.

    This person is just trying to get ahead of the game!

ninetyninenine 2 days ago

I still say it’s too early to tell.

It took a decade to reach LLMs. It will likely be another decade for AGI. There is still clear trendline progress, and we have a clear real-world example of actual human-level intelligence, so we know it can be done.

  • bigfishrunning 2 days ago

    A decade? What do you consider your start point here? Minsky wrote his NN thesis in 1954.

    • ninetyninenine a day ago

      Obviously I'm referring to the AI revolution started by Hinton.

sunami-ai 2 days ago

This is all BS tbh. People don't know how to use current-gen AI to do very useful reasoning.

I keep posting our work as an example, and NO ONE here (old HN is dead) has managed to point out any reasoning issues (most recently we redacted the in-between thinking, i.e. the thinking traces that people were treating as the final answer).

I dare you to tell me this is not useful when we are signing up customers daily for trial:

https://labs.sunami.ai/feed

  • vessenes 2 days ago

    You sound grumpy but I followed the link because I was curious. It’s super hard to use on mobile and I can’t really tell what it is even clicking and swiping a little. What is it? Congrats on the signups.

  • losteric 2 days ago

    I clicked your link and have absolutely no idea what I am looking at. There is overlapping text, obscure words without definition, terribly confusing UX.

    What is this?

    Who is it for?

    Why do you think this is demonstrates the OP is BS?

    • sunami-ai 2 days ago

      Read the reasoning after clicking on the stats, or swipe to the next page and click on the donuts. It's for in-house legal teams, not for mobile users. Read the statements and reasoning; tell me if the reasoning is wrong. One of the top 100 lawyers in the US looked at it and told us he likes how it balances the intent of the law, with no gripes about the reasoning. We're not using a reasoning model under the hood; we built one for legal. It means that gen AI is useful, and LLMs can be made to reason with inference-time scaling. It's not rocket science, but it's also not easy. What hype? We think gen AI is actually underhyped.

      • jampekka 2 days ago

        Doesn't seem even wrong.

  • dwedge 2 days ago

    Nobody has interacted with your ad?

  • ludicrousdispla 2 days ago

    People that know they need useful reasoning are usually fairly good at it themselves.

  • ForHackernews 2 days ago

    What is this link supposed to illustrate? It doesn't render properly in Firefox - some CSS glitch, elements all on top of one another. Was this produced by AI?

    • sunami-ai 2 days ago

      Sorry, it's for corporate legal users, so desktop only. Not for mobile, and never tested on Firefox.

      • layer8 2 days ago

        That’s no reason for the layout to break. I mean, CSS is kind of messy, I get it, but still.

      • fragmede 2 days ago

        yeah because no lawyers I know ever use smartphones for things.

        fix your shit.

        • fragmede a day ago

          It's a good thing for the creator that their dead reply is hidden by default. I'd hate to work with someone whose response to my blunt, possibly rude comment is to call me stupid. Whether or not I am, that's just not a good look.

dwedge 2 days ago

The thing that puts me off AI the most is that I feel it's only free/cheap while we train it, and in 3 or 4 years it will be a few thousand a month and only available for corporate

  • aetherson 2 days ago

    I am willing to bet you any amount of money you want that 3-4 years from now, current-generation AI models will cost less in real terms than they do now, not more.

    • QuadmasterXLII 2 days ago

      “In real terms” has to be clarified a bit in this context because in the full AI bull case, the value of your labor goes to zero and you can’t afford gruel except from your investments, or to the extent that your overlords deign to gift it to you. Inference per unit gruel will of course continue to get cheaper, so obviously this is the Good Future we should Seek to Bring About

      • aetherson 2 days ago

        "In real terms" has never once been used to describe an individual's situation. Like if you see a chart that has "real dollars" on it, it doesn't mean, "the author of this chart got a massive inheritance and so their money became less meaningful to them."

  • cyberlurker 2 days ago

    You can run distilled LLMs on your phone today that are surprisingly good. In 3-4 years I think an individual could self host for much less than thousands a month.

    This wouldn’t be my biggest worry.