ChatGPT Pulse

(openai.com)

379 points | by meetpateltech 9 hours ago

83 comments

  • SirensOfTitan
    8 hours ago
    My pulse today is just a mediocre rehash of prior conversations I’ve had on the platform.

    I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.

    I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways. Both will stick around, but fall well short of the tulip mania VCs and tech leaders have pushed.

    I’ve long contended that tech has lost any soulful vision of the future; it’s just tactical money-making all the way down.

    • jasonsb
      8 hours ago
      > I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways.

      It doesn't feel like blockchain at all. Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).

      AI is a powerful tool for those who are willing to put in the work. People who have the time, knowledge and critical thinking skills to verify its outputs and steer it toward better answers. My personal productivity has skyrocketed in the last 12 months. The real problem isn’t AI itself; it’s the overblown promise that it would magically turn anyone into a programmer, architect, or lawyer without effort, expertise or even active engagement. That promise is pretty much dead at this point.

      • jsheard
        8 hours ago
        > My personal productivity has skyrocketed in the last 12 months.

        Has your productivity objectively, measurably improved or does it just feel like it has improved? Recall the METR study which caught programmers self-reporting they were 20% faster with AI when they were actually 20% slower.

        • jasonsb
          8 hours ago
          Objectively. I’m now tackling tasks I wouldn’t have even considered two or three years ago, but the biggest breakthrough has been overcoming procrastination. When AI handles over 50% of the work, there’s a 90% chance I’ll finish the entire task faster than it would normally take me just to get started on something new.
          • kenjackson
            7 hours ago
            This. I had a long-standing dispute that I just never had the energy to look up what needed to be done to resolve. I just described it to ChatGPT and it generated everything -- including the emails I needed to send and who to send them to. Two weeks later it was taken care of. I had sat on it for literally three months until then.

            If I could have something that said, "Here are some things that it looks like you're procrastinating on -- do you want me to get started on them for you?" -- that would probably be crazy useful.

            • meowface
              5 hours ago
              Exactly. Agentic LLMs are amazing for people who suffer from chronic akrasia.
            • lethologica
              1 hour ago
              I have ADHD and it almost acts as a body double for me, which I find to be incredibly helpful to get things done.
              • exogenousdata
                1 hour ago
                GPT-4 got me seriously considering making a product for school-age kids with ADHD. It’d be a physical device (like a Star Trek communicator) that listens during your day and keeps track of a) things you say you’ll do or b) tasks that other people ask you to do. Then it compiles those tasks and acts as, basically, a secretary. It could also plug into your email, texts, and school assignments.

                The privacy implications are horrifying. But if done right, you’re talking about a kind of digital ‘executive function’ that could help a lot of kids who struggle with things like prioritization and time blindness.

          • svachalek
            7 hours ago
            I don't think it's helped me do anything I couldn't do; in fact, I've learned it's far easier to do hard things myself than to prompt an AI out of the ditches it will dig trying to do them. But I also find it's great for getting painful and annoying tasks out of the way that I really can't motivate myself to do.
            • cheema33
              3 hours ago
              > I don't think it's helped me do anything I couldn't do

              I am seeing a pattern here. It appears that AI isn't for everyone. Not everyone's personality may be a good fit for using AI, just like not everybody is a good candidate for being a software dev or a police officer.

              I used to think that it's a tool, like a car: everybody would want one. But that appears not to be the case.

              For me: I use AI every day as a tool, for work and home tasks. It is a massive help for me.

            • wiml
              5 hours ago
              I think there might be cases, for some people or some tasks, where the difficulty of filling in a blank page is greater than the difficulty of fixing an entire page of errors. Even if you have to do all the same mental work, it feels like a different category of work.
          • stavros
            5 hours ago
            Same. It takes the drudgery out of creating, so I can at least start the projects. Then I can go down into the detail just enough that the AI doesn't produce crap, but without needing to write the actual lines of code myself.

            Hell, in the past few days I started making something to help me write documents for work (https://www.writelucid.cc) and a viewer for all my blood tests (https://github.com/skorokithakis/bt-viewer), and I don't think I would have made either without an LLM.

            • lxgr
              5 hours ago
              Same here. I’ve single-shot created a few Raycast plugins for TLV decoding, which I use almost daily at work, and they save me anywhere from several seconds to a few minutes per task.

              Would have never done that without LLMs.
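              For anyone unfamiliar, "TLV" here is tag-length-value encoding. A hypothetical sketch of the kind of decoder such a plugin needs, assuming the simplest variant with single-byte tags and lengths (real formats like BER-TLV use multi-byte tags and lengths, which this deliberately ignores):

```rust
// Hypothetical sketch of decoding a simple TLV stream: single-byte tag,
// single-byte length, then `length` bytes of value, repeated until the
// buffer is exhausted.
fn decode_tlv(mut data: &[u8]) -> Option<Vec<(u8, Vec<u8>)>> {
    let mut fields = Vec::new();
    while !data.is_empty() {
        if data.len() < 2 {
            return None; // truncated header
        }
        let (tag, len) = (data[0], data[1] as usize);
        if data.len() < 2 + len {
            return None; // declared length runs past the end of the buffer
        }
        fields.push((tag, data[2..2 + len].to_vec()));
        data = &data[2 + len..];
    }
    Some(fields)
}

fn main() {
    // Two fields: tag 0x01 -> [0xAA], tag 0x02 -> [0xBB, 0xCC].
    let parsed = decode_tlv(&[0x01, 0x01, 0xAA, 0x02, 0x02, 0xBB, 0xCC]).unwrap();
    assert_eq!(parsed, vec![(0x01, vec![0xAA]), (0x02, vec![0xBB, 0xCC])]);
    // A field claiming more bytes than remain is rejected.
    assert!(decode_tlv(&[0x01, 0x05, 0xAA]).is_none());
}
```

              Small, well-specified utilities like this are exactly the sort of thing the comment describes LLMs one-shotting reliably.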

          • ryukoposting
            14 minutes ago
            What tasks is it doing 50% of the work on for you?
          • agency
            7 hours ago
            > I’m now tackling tasks I wouldn’t have even considered two or three years ago

            Ok, so subjective

            • dotslashmain
              7 hours ago
              Any objective measure of "productivity" (when it comes to knowledge work) is, when you dig down into it enough, ultimately subjective.
            • moralestapia
              7 hours ago
              "Not done" vs "Done" is as objective as it gets.
              • miyoji
                7 hours ago
                You obviously have never worked at a company that spends time arguing about the "definition of done". It's one of the most subjective topics I know of.
                • fbxio
                  5 hours ago
                  It removes ambiguity. Everyone knows when work is truly considered done, avoiding rework, surprises, and finger-pointing down the line.
                • vorticalbox
                  6 hours ago
                  At work we call this scope creep.
                • ycombigators
                  3 hours ago
                  Sounds like a company is not adequately defining what the deliverables are.

                  Task: Walk to the shops & buy some milk.

                  Deliverables: 1. Video of walking to the shops (including capturing that day's newspaper at the local shop). 2. Receipt from the local store for milk. 3. Physical bottle of milk.

                  • tbossanova
                    2 hours ago
                    Cool, I went to the store and bought a 50ml bottle of probiotic coconut milk. Task done?
                    • ycombigators
                      19 minutes ago
                      Yes.

                      milk (noun):

                      1. A whitish liquid containing proteins, fats, lactose, and various vitamins and minerals that is produced by the mammary glands of all mature female mammals after they have given birth and serves as nourishment for their young.

                      2. The milk of cows, goats, or other animals, used as food by humans.

                      3. Any of various potable liquids resembling milk, such as coconut milk or soymilk.

          • OtherShrezzing
            6 hours ago
            > I’m now tackling tasks I wouldn’t have even considered two or three years ago

            Could you give some examples, and an indication of your level of experience in the domains?

            The statement has a much different meaning if you were a junior developer 2 years ago versus a staff engineer.

            • PeterFBell
              2 hours ago
              I have been coding on and off (more off than on) for 47 years. I kinda stopped paying attention when we got past jQuery, and was never a fan of prototypal inheritance. Never built anything with Tailwind, Next.js, etc. After spending some time writing copy, user stories, and a design brief (all iterative with ChatGPT), Cursor one-shot my (simple) web app, and I was live (once I'd spent a couple hours documenting my requirements and writing my copy) after 20 minutes of vibe coding.

              I've been adding small features, in a language I don't program in, using libraries I'm not familiar with, that meet my modest functional requirements, in a couple minutes each. I work with an LLM to refine my prompt, put it into Cursor, run the app locally, look at the diffs, commit, push, and I'm live on Vercel within a minute or two.

              I don't have any good metrics for productivity, so I'm 100% subjective, but I can say that even if I'd been building in Rails (it's been ~4 years, but I coded in it for a decade) it would have taken me at least 8 hours to have an app where I was happy with both the functionality and the look and feel, so a 10x improvement in productivity for that task feels about right.

              And having a "buddy" I can discuss a project with lowers the activation energy, allowing me to complete more.

              Also, for YC videos I don't have time to watch, I get a transcript, feed it into ChatGPT, and ask for the key takeaways I could apply to my business (it's in a project where it has context on stage, industry, maturity, business goals, key challenges, etc.), so I get the benefit of 90 minutes of listening, plus maybe 15 minutes of summarizing, reviewing, and synthesis, in typically 5-6 minutes. And it'd be quicker if I built a pipeline (something I'm vibe coding next month).

              Wouldn't want to do business without it.

        • jama211
          7 hours ago
          Not this again. That study had serious problems.

          But I’m not even going to argue about that. I want to raise something no one else seems to mention about AI in coding work. I do a lot of work now with AI that I used to code by hand, and if you told me I was 20% slower on average, I would say “that’s totally fine it’s still worth it” because the EFFORT level from my end feels so much less.

          It’s like, a robot vacuum might take way longer to clean the house than if I did it by hand, sure. But I don’t regret the purchase, because I have to do so much less _work_.

          Coding work that I used to procrastinate about because it was tedious or painful I just breeze through now. I’m so much less burnt out week to week.

          I couldn’t care less if I’m slower at a specific task, my LIFE is way better now I have AI to assist me with my coding work, and that’s super valuable no matter what the study says.

          (Though I will say, I believe I have extremely good evidence that in my case I’m also more productive, averages are averages and I suspect many people are bad at using AI, but that’s an argument for another time).

          • troupo
            5 hours ago
            > Not this again. That study had serious problems.

            The problem is, there are very few if any other studies.

            All the hype around LLMs we are supposed to just believe. Any criticism is "this study has serious problems".

            > It’s like, a robot vacuum might take way longer

            > Coding work that I used to procrastinate

            Note how your answer to "the study had serious problems" is totally problem-free analogies and personal anecdotes.

            • keeda
              4 hours ago
              > The problem is, there are very few if any other studies.

              Not at all, the METR study just got a ton of attention. There are tons out there at much larger scales, almost all of them showing significant productivity boosts for various measures of "productivity".

              If you stick to the standard of "Randomly controlled trials on real-world tasks" here are a few:

              https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566 (4867 developers across 3 large companies including Microsoft, measuring closed PRs)

              https://www.bis.org/publ/work1208.pdf (1219 programmers at a Chinese BigTech, measuring LoC)

              https://www.youtube.com/watch?v=tbDDYKRFjhk (from Stanford, not an RCT, but the largest scale with actual commits from 100K developers across 600+ companies, and tries to account for reworking AI output. Same guys behind the "ghost engineers" story.)

              If you look beyond real-world tasks and consider things like standardized tasks, there are a few more:

              https://ieeexplore.ieee.org/abstract/document/11121676 (96 Google engineers, but same "enterprise grade" task rather than different tasks.)

              https://aaltodoc.aalto.fi/server/api/core/bitstreams/dfab4e9... (25 professional developers across 7 tasks at a Finnish technology consultancy.)

              They all find productivity boosts in the 15 - 30% range -- with a ton of nuance, of course. If you look beyond these at things like open source commits, code reviews, developer surveys etc. you'll find even more evidence of positive impacts from AI.

              • risyachka
                4 hours ago
                Closed PRs, commits, LoC, etc. are useless vanity metrics.

                With AI code you have more LoC and NEED more PRs to fix all its slop.

                In the end you have increased numbers with a net negative effect.

                • keeda
                  3 hours ago
                  Most of those studies call this out and try to control for it (edit: "it" here being the usual limitations of LoC and PRs as measures of productivity) where possible. But to your point, no, there is still a strong net positive effect:

                  > https://www.youtube.com/watch?v=tbDDYKRFjhk (from Stanford, not an RCT, but the largest scale with actual commits from 100K developers across 600+ companies, and tries to account for reworking AI output. Same guys behind the "ghost engineers" story.)

                  Emphasis added. They modeled a way to detect when AI output is being reworked, and still find a 15-20% increase in throughput. Specific timestamp: https://youtu.be/tbDDYKRFjhk?t=590&si=63qBzP6jc7OLtGyk

                  • tbossanova
                    2 hours ago
                    Could you try to avoid uncertainties like this by measuring something like revenue growth before and after AI? Given enough data.
                    • keeda
                      1 hour ago
                      Hmm, not an economist, but I have seen other studies that look at things at the firm level, so it should definitely be possible. A quick search on Google and SSRN turned up some studies, but they seem to focus on productivity rather than revenues, not sure why. Such studies also depend on the available data, so a lot of key information may be hidden, e.g. the revenues of privately held companies, which constitute a large part of the economy.
                      • tbossanova
                        1 hour ago
                        True, it probably would be difficult to gather representative data. It also might be hard to separate out broader economic effects, e.g. overall upturns.
          • blks
            6 hours ago
            Often someone’s personal productivity with AI means someone else has to dig through their piles of rubbish to review the PRs they submitted.

            In your particular case, it sounds like you’re rapidly losing your developer skills, and enjoy that you now get to put in less effort and think less.

            • wussboy
              6 hours ago
              We know that relying heavily on Google Maps makes you less able to navigate without Google Maps. I don't think there's research on this yet, but I would be stunned if the same process isn't at play here.
              • fbxio
                5 hours ago
                Whatever your mind believes it doesn't need to hold on to, and that is expensive to maintain and run, it'll let go of. This isn't entirely accurate from a neuroscience perspective, but it's kinda ballpark.

                Pretty much like muscles decay when we stop using them.

                • lxgr
                  5 hours ago
                  Sure, but sticking with that analogy, bicycles haven’t caused the muscles of people that used to go for walks and runs to atrophy either – they now just go much longer distances in the same time, with less joint damage and more change in scenery :)
                  • Taganov
                    4 hours ago
                    Oh, but they do atrophy, and in devious ways. Though the muscles under linear load may stay healthy, the ability of the body to handle the knee, ankle, and hip joints under dynamic and twisting motion does atrophy. Worse yet, one may think that they are healthy and strong, due to years of biking, and unintentionally injure themselves when doing more dynamic sports.

                    Take my personal experience for whatever it is worth, but my knees do not lie.

                  • gf000
                    4 hours ago
                    Well, you still walk in most indoor places, even if you are on the bike as much as humanly possible.

                    But if you were literally chained to a bike and could not move in any other way, then surely you would "forget"/atrophy in specific ways, such that you wouldn't be able to walk without relearning/practicing.

                  • contrariety
                    5 hours ago
                    To extend the analogy further, people who replace all their walking and other impact exercises with cycling tend to end up with low bone density and then have a much higher risk of broken legs when they get older.
              • lukan
                5 hours ago
                I know that I am better at navigating with Google Maps than the average person, because I navigated for years without it (partly on purpose). I know when not to trust it. I know when to ignore its recommendations on recalculated routes.

                Same with LLMs. I am better with them because I know how to solve things without their help. I understand the problem space and the limitations. I also understand how hype works and why they think they need it (investors' money).

                In other words, no, just using Google Maps or ChatGPT does not make me dumb. Only using them and blindly trusting them would.

            • al_be_back
              1 hour ago
              It gets worse for projects outsourced to one or more consultancy firms, where staff costs are prohibitively high; now you've got another layer of complexity to factor in (risks, costs).

              Consultancy A submits work, Consultancy B reviews/tests it. As A increases its use of AI, B will have to match with more staff or more AI. More staff for B means higher costs, at a slower pace. More AI for B means a higher burden of proof, and an A-vs-B race condition is likely.

              Ultimately clients will suffer from AI fatigue and inadvertently incur more costs at a later stage (post-delivery).

            • lxgr
              5 hours ago
              Another way of viewing it would be that LLMs allow software developers to focus their development skills where it actually matters (correctness, architecture etc.), rather than wasting hours catering to the framework or library of the day’s configuration idiosyncrasies.

              That stuff kills my motivation to solve actual problems like nothing else. Being able to send off an agent to e.g. fix some build script bug so that I can get to the actual problem is amazing even with only a 50% success rate.

              • iLoveOncall
                3 hours ago
                The path forward here is to have better frameworks and libraries, not to rely on a random token generator.
                • djeastm
                  3 hours ago
                  >better frameworks and libraries

                  I feel like the past few decades of framework churn has shown that we're really never going to agree on what this means

            • rapind
              1 hour ago
              > In your particular case it sounds like you're rapidly losing your developer skills, and enjoy that now you have to put less effort and think less.

              Just the other day I was complaining that no one knows how to use a slide rule anymore...

              Also, C++ compilers are producing machine code that's hot garbage. It's like no one understands assembly anymore...

              Even simple tools are often misused (like hammering a screw), but sometimes they are extremely useful in the right hands. I think we'll discover that the actual writing of code isn't as meaningful as thinking about code.

            • fbxio
              5 hours ago
              I’d love not to have to be great at programming, just as I enjoy not having to be great at cleaning the sewers. But I get what you mean: we do lose some potentially valuable skills if we outsource them too often for too long.
              • lxgr
                5 hours ago
                It’s probably roughly as problematic as most people not being able to fix even simple problems with their cars themselves these days (i.e., not very).
            • stavros
              5 hours ago
              I'm losing my developer skills like I lost my writing skills when I got a keyboard. Yes, I can no longer write with a pen, but that doesn't mean I can't write.
            • danenania
              3 hours ago
              My own code quality is better with AI, because it makes it feasible to indulge my perfectionism to a much greater degree. Before AI, I usually needed to stop sooner than I would have liked to and call it good enough. Now I can justify making everything much more robust because it doesn’t take a lot longer.

              It’s the same story with UI/UX. Previously, I’d often have to skip little UI niceties because they take time and aren’t that important. Now even relatively minor user flows can be very well polished because there isn’t much cost to doing so.

        • rpdillon
          6 hours ago
          My personal project output has gone up dramatically since I started using AI, because I can now use times of night when I'm otherwise too mentally tired: I work with AI to crank through a first draft of a change that I can then iterate on later. This has allowed me to start actually implementing side projects that I've had ideas about for years, and to build software for myself in a way I never could previously (at least not since I had kids).

          I know it's not some amazing GDP-improving miracle, but in my personal life it's been incredibly rewarding.

          • animex
            5 hours ago
            This, 1000x.

            I had a dozen domains and projects on the shelf for years and now 8 of them have significant active development. I've already deployed 2 sites to production. My github activity is lighting up like a Christmas tree.

          • boogieknite
            4 hours ago
            I find a lot of value in using it to give half-baked ideas momentum. Some sort of "shower thought" will occur to me for a personal project while I'm at work, and I'll prompt Claude Code to analyze and demonstrate an implementation for review later.

            On the other hand, I believe my coworker may have taken it too far. It seems like his productivity has significantly slipped. In my perception, the approaches he's using are convoluted and have no useful outcome. I'm almost worried about him, because his descriptions of what he's doing make no sense to me or my teammates. He's spending a lot of time on it. I'm considering telling him to chill out, but who knows, maybe I'm just not as advanced a user as he is? Anyone have experience with this?

            • r_lee
              3 hours ago
              Do you mean like convoluted agentic stuff, markdown files etc? Or like AI delusion?
              • boogieknite
                1 hour ago
                The former.

                It started as an approach to a mass legacy-code migration. A sound idea, with potential to save time. I followed along and understood his markdown and agent setup for analyzing and porting legacy code.

                I reviewed the results that apply to my projects. They were a mixed bag, but I think it saved some time overall. Now, though, I don't get where he's going with his AI aspirations.

                My best attempt to understand: he wants to work entirely through chats, no writing code, and he's doing so by improving agents through chats. He's really swept up in the entire concept. I consider myself optimistic about AI, but his enthusiasm feels misplaced.

                It's to the point where his work is slipping and management is asking him where his results are. We're a small team, and management isn't savvy enough to see he's getting NOTHING done, and I won't sell him out. However, if this is a known delusional pattern, I'd like to address it and point to a definition and/or past cases so he can recognize the pattern and avoid trouble.

        • mountainriver
          11 minutes ago
          There have been plenty of studies showing the opposite. Also, a sample size of 16 ain't much.
        • james_marks
          6 hours ago
          Yesterday is a good example- in 2 days, I completed what I expected to be a week’s worth of heads-down coding. I had to take a walk and make all new goals.

          The right AI, good patterns in the codebase and 20 years of experience and it is wild how productive I can be.

          Compare that to a few years ago, when at the end of the week, it was the opposite.

          • al_be_back
            1 hour ago
            Makes no sense. How are you comparing yesterday with "a few years ago"?
        • Kiro
          5 hours ago
          The "you only think you're more productive" argument is tiresome. Yes, I know for sure that I'm more productive. There's nothing uncertain about it. Does it lead to other problems? No doubt, but claiming my productivity gains are imaginary is not serious.

          I've seen a lot of people who previously touted that it doesn't work at all use that study as a way to move the goalpost and pretend they've been right all along.

          • chrysoprace
            4 hours ago
            I would be interested to know how you measure your productivity gains though, in an objective way where you're not the victim of bias.

            I just recently had to rate whether I felt like I got more done by leaning more on Claude Code for a week to do a toy project and while I _feel_ like I was more productive, I was already biased to think so, and so I was a lot more careful with my answer, especially as I had to spend a considerable amount of time either reworking the generated code or throwing away several hours of work because it simply made things up.

            • Kiro
              3 hours ago
              It sounds like you're very productive without AI or that your perceived gains are pretty small. To me, it's such a stark contrast that asking how I measure it is like asking me to objectively verify that a car is faster than walking.
          • m_fayer
            4 hours ago
            We’re being accused of false consciousness!
        • yokoprime
          6 hours ago
          I'm objectively faster. Not necessarily when I'm working on a task I've done routinely for years, but when taking on new challenges I'm up and running much faster. A lot of it has to do with offloading the basic research while allowing myself to be interrupted; it's not a problem when people reach out with urgent matters while I'm taking on a challenge I've only just started to build towards. Being able to correct the AI where I can tell it's making false assumptions or going off the rails also helps speed things up.
        • tomrod
          3 hours ago
          I'm not who you responded to. I see about a 40% to 60% speed up as a solution architect when I sit down to code and about a 20% speedup when building/experimenting with research artifacts (I write papers occasionally).

          I have always been a careful tester, so my UAT hasn't blown up out of proportion.

          The big issue I see is with Rust: it generates code using 2023-era conventions, though I understand there is some improvement in that direction.

          Our hiring pipeline is changing dramatically as well, since the normal things a junior needs to know (code, syntax) are no longer as expensive. Joel Spolsky's mantra to hire curious people who get things done captures well the folks I find are growing well as juniors.

        • CuriouslyC
          7 hours ago
          If you want another data point, you can just look at my company github (https://github.com/orgs/sibyllinesoft/repositories). ~27 projects in the last 5 weeks, probably on the order of half a million lines of code, and multiple significant projects that are approaching ship readiness (I need to stop tuning algorithms and making stuff gorgeous and just fix installation/ensure cross platform is working, lol).
          • thegrim33
            6 hours ago
            I don't do Rust or Javascript so I can't judge, but I opened a file at random and feel like the commenting probably serves as a good enough code smell.

            From the one random file I opened:

            /// Real LSP server implementation for Lens
            pub struct LensLspServer

            /// Configuration for the LSP server
            pub struct LspServerConfig

            /// Convert search results to LSP locations
            async fn search_results_to_locations()

            /// Perform search based on workspace symbol request
            async fn search_workspace_symbols()

            /// Search for text in workspace
            async fn search_text_in_workspace()

            etc., etc., etc., x1000.

            I don't see a single piece of logic actually documented with why it's doing what it's doing, or how it works, or why values are what they are, nearly 100% of the comments are just:

            function-do-x() // Function that does x
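            To make the distinction concrete, a hypothetical example (not from the repository above): the doc comment restates the function's name, while the inline comment records a decision the code alone cannot explain.

```rust
// Hypothetical illustration of the difference between a comment that
// restates the signature and one that documents a "why".

/// Convert a score to a grade  <- restates the name, adds nothing
fn score_to_grade(score: u32) -> char {
    // The cutoffs encode the grading policy (90+ is an A, below 60 fails).
    // That policy, not the match syntax, is what a maintainer needs explained.
    match score {
        90.. => 'A',
        80..=89 => 'B',
        70..=79 => 'C',
        60..=69 => 'D',
        _ => 'F',
    }
}

fn main() {
    assert_eq!(score_to_grade(95), 'A');
    assert_eq!(score_to_grade(59), 'F');
}
```

            The first comment could be deleted with no loss; the second is the kind reviewers actually want.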

            • CuriouslyC
              5 hours ago
              Sure, this is a reasonable point, but understand that documentation passes come late, because if you do heavy documentation refinement on a product under feature/implementation drift you just end up with a mess of stale docs and repeated work.
            • oofbey
              6 hours ago
              Early coding agents wanted to do this - comment every line of code. You used to have to yell at them not to. Now they’ve mostly stopped doing this at all.
          • sebastiennight
            5 hours ago
            First off, congrats on the progress.

            Second, as you seem to be an entrepreneur, I would suggest you consider adopting the belief that you've not been productive until the thing's shipped into prod and available for purchase. Until then you've just been active.

          • blks
            6 hours ago
            So your company is actively shipping tens of thousands of AI-generated lines of code?
          • mym1990
            6 hours ago
            Sooo you launched https://sibylline.dev/, which looks like a bunch of AI slop, then spun up a bunch of GitHub repos, seeded them with more AI slop, and tout that you're shipping 500,000 lines of code?

            I'll pass on this data point.

            • CuriouslyC
              5 hours ago
              [flagged]
              • grayhatter
                5 hours ago
                I mean, you're slinging insults, so it's hard for me to agree that he's the toxic person in this conversation...
          • shafyy
            7 hours ago
            Lines of code is not a measure of anything meaningful on its own. The mere fact that you suggest this as proof that you are more productive makes me think you are not.
            • CuriouslyC
              7 hours ago
              The SWE industry is eagerly awaiting your proposed accurate metric.

              I find that people who dismiss LoC out of hand without supplying better metrics tend to be low performers trying to run for cover.

              • wiml
                5 hours ago
                You're new to the industry, aren't you?
              • dmamills
                5 hours ago
                A metric I'd be interested in is the number of clients you can convince to use this slop.
                • CuriouslyC
                  5 hours ago
                  That's a sales metric brother.
              • troupo
                5 hours ago
                > The SWE industry is eagerly awaiting your proposed accurate metric.

                There are none. All of them are variants of bad. LoC is probably the worst metric of all, because it says nothing about quality, or features, or the number of products shipped. It's also the easiest metric to game. Just write GoF-style Java, and you're off to the races. Don't forget to have a source code license at the beginning of every file. Boom. LoC.

                The only metrics that barely work are:

                - features delivered per unit of time. Requires an actual plan for the product, and an understanding that some features will inevitably take a long time

                - number of bugs delivered per unit of time. This one is somewhat inversely correlated with LoC and features, by the way: the fewer lines of code and/or features, the fewer bugs

                - number of bugs fixed per unit of time. The faster bugs are fixed the better

                None of the other bullshit works.

              • oofbey
                6 hours ago
                LoC is so easy to game. Reformat. Check in a notebook. Move things around. Pointless refactor.

                If nobody is watching LoC, it's generally a good metric. But as soon as people start valuing it, it becomes useless.

        • logicprog
          8 hours ago
          The design of that study is pretty bad, and as a result it doesn't end up actually showing what it claims to show / what people claim it does.

          https://www.fightforthehuman.com/are-developers-slowed-down-...

          • singron
            6 hours ago
            I don't think there is anything factually wrong with this criticism, but it largely rehashes caveats that are already well explored in the original paper, which goes to unusual lengths to clearly explain the many ways the study is flawed.

            The study gets so much attention since it's one of the few studies on the topic with this level of rigor in real-world scenarios, and it explains why previous studies or anecdotes may have claimed perceived increases in productivity even if there were no actual increases. It clearly sets a standard: we can't just ask people if they felt more productive (or they need to feel massively more productive to clearly overcome this bias).

        • m_fayer
          4 hours ago
          It seems like the programming world is increasingly dividing into “LLMs for coding are at best marginally useful and produce huge tech debt” vs “LLMs are a game changing productivity boost”.

          I truly don’t know how to account for the discrepancy, I can imagine many possible explanations.

          But what really gets my goat is how political this debate is becoming. To the point that the productivity-camp, of which I’m a part, is being accused of deluding themselves.

          I get that OpenAI has big ethical issues. And that there’s a bubble. And that ai is damaging education. And that it may cause all sorts of economic dislocation. (I emphatically Do Not get the doomers, give me a break).

          But all those things don’t negate the simple fact that for many of us, LLMs are an amazing programming tool, and we’ve been around long enough to distinguish substance from illusion. I don’t need a study to confirm what’s right in front of me.

          • Sharlin
            2 hours ago
            I’d love to know whether and to what extent the people for which AI has been a huge boost are those who were already producing slop, and now they have AI that can produce that slop much faster.
        • tunesmith
          6 hours ago
          Data point: I run a site where users submit a record. There was a request months ago to allow users to edit the record after submitting. I put it off because while it's an established pattern it touches a lot of things and I found it annoying busy work and thus low priority. So then gpt5-codex came out and allowed me to use it in codex cli with my existing member account. I asked it to support edit for that feature all the way through the backend with a pleasing UI that fit my theme. It one-shotted it in about ten minutes. I asked for one UI adjustment that I decided I liked better, another five minutes, and I reviewed and released it to prod within an hour. So, you know, months versus an hour.
          • paulryanrogers
            3 hours ago
            Is the hour really comparable to months spent not working on it?
            • rpdillon
              3 hours ago
              He's referring to the reality that AI helps you pick up and finish tasks that you otherwise would have put off. I see this all day every day with my side projects as well as security and customer escalations that come into my team. It's not that Giant Project X was done six times as fast. It's more like we were able to do six small but consequential bug fixes and security updates while we continued to push on the large feature.
        • citizenkeen
          3 hours ago
          I have a very big hobby code project I’ve been working on for years.

          AI has not made me much more productive at work.

          I can only work on my hobby project when I’m tired after the kids go to bed. AI has made me 3x productive there because reviewing code is easier than architecting. I can sense if it’s bad, I have good tests, the requests are pretty manageable (make a new crud page for this DTO using app conventions).

          But at work where I’m fresh and tackling hard problems that are 50% business political will? If anything it slows me down.

        • bozhark
          6 hours ago
          Yes, for me.

          Instead of getting overwhelmed doing too many things, I can offload a lot of menial and time-driven tasks.

          Reviews are absolutely necessary but take less time than creation

      • swalsh
        8 hours ago
        " Blockchain is probably the most useless technology ever invented "

        Actually, AI may be more like blockchain than you give it credit for. Blockchain feels useless to you because you either don't care about or don't value the use cases it's good for. For those who do, it opens a whole new world they eagerly look forward to. As a coder, it's magical to describe a world and then see AI build it. As a copyeditor, it may be scary to see AI take my job. Maybe you've seen it hallucinate a few times, and you just don't trust it.

        I like the idea of interoperable money legos. If you hate that, and you live in a place where the banking system is protected and reliable, you may not understand blockchain. It may feel useless or scary. I think AI is the same. To some it's very useful, to others it's scary at best and useless at worst.

        • boc
          7 hours ago
          Blockchain is essentially useless.

          You need legal systems to enforce trust in societies, not code. Otherwise you'll end up with endless $10 wrench attacks until we all agree to let someone else hold our personal wealth for us in a secure, easy-to-access place. We might call it a bank.

          The end state of crypto is always just a nightmarish dystopia. Wealth isn't created by hoarding digital currency, it's created by productivity. People just think they found a shortcut, but it's not the first (or last) time humans will learn this lesson.

          • abraxas
            3 hours ago
            I call blockchain an instantiation of Bostrom's Paperclip Maximizer running on a hybrid human-machine topology.

            We are burning through scarce fuel in amounts sufficient to power a small developed nation in order to reverse engineer... one-way hash codes! That is literally even less value than turning matter into paperclips.

          • anonandwhistle
            4 hours ago
            Humanity's biggest-ever wealth-storing thing is literally a ROCK
        • esafak
          7 hours ago
          People in countries with high inflation or where the banking system is unreliable are not using blockchains, either.
          • eric_cc
            5 hours ago
            Do you have any proof to support this claim? Stablecoin use alone accounts for tens (possibly hundreds by now) of billions of dollars in daily transactions globally. I'd be interested to hear the source for your claim.
            • esafak
              4 hours ago
              I'm from a high inflation country. Let's see your evidence of use in such countries, since you are throwing numbers out.
        • wat10000
          6 hours ago
          It may not be the absolute most useless, but it's awfully niche. You can use it to transfer money if you live somewhere with a crap banking system. And it's very useful for certain kinds of crime. And that's about it, after almost two decades. Plenty of other possibilities have been proposed and attempted, but nothing has actually stuck. (Remember NFTs? That was an amusing few weeks.) The technology is interesting and cool, but that's different from being useful. LLM chatbots are already way more generally useful than that and they're only three years old.
        • yieldcrv
          7 hours ago
          "I'm not the target audience and I would never do the convoluted alternative I imagined on the spot that I think are better than what blockchain users do"
      • glouwbug
        3 hours ago
        This is funny, because my personal view is that AI’s biggest pitfall is that it allows the unqualified to build what they think they’re qualified for
      • Terr_
        5 hours ago
        > the most useless technology

        Side-rant pet-peeve: People who try to rescue the reputation of "Blockchain" as a promising way forward by saying its weaknesses go away once you do a "private blockchain."

        This is equivalent to claiming the self-balancing Segway vehicles are still the future, they just need to be "improved even more" by adding another set of wheels, an enclosed cabin, and disabling the self-balancing feature.

        Congratulations, you've backtracked back to a classic [distributed database / car].

      • Geste
        2 hours ago
        Bad take about blockchain. Being able to send value across borders without intermediaries is unheard of in human history.
      • ericfr11
        25 minutes ago
        Take a look at the new payment protocols for AI agents
      • domatic1
        5 hours ago
        >> Blockchain is probably the most useless technology ever invented

        so useless there is almost $3 Trillion of value on blockchains.

        • davidcbc
          4 hours ago
          No there isn't. These ridiculous numbers are made up by taking the last price a coin sold for and multiplying it by all coins. If I create a shitcoin with 1 trillion coins and then sell one to a friend for $1, I've suddenly created a coin with $1 trillion in "value"
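          As a sketch, the naive arithmetic being criticized is just last trade price times total supply (all values hypothetical):

```python
# Naive "market cap" arithmetic: last trade price x total supply,
# ignoring liquidity entirely. Values are made up for illustration.
total_supply = 1_000_000_000_000   # 1 trillion hypothetical shitcoins
last_trade_price = 1.00            # one coin sold to a friend for $1

naive_market_cap = total_supply * last_trade_price
print(f"${naive_market_cap:,.0f}")  # $1,000,000,000,000
```

          Nothing in that formula asks whether more than one coin could actually be sold at that price.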
          • fylo
            3 hours ago
            I'm not sure the 3 trillion includes shit coin valuations. Typically volume and liquidity are factored in for any serious analysis. Your assessment of valuation is probably just as true for traditional currencies and stocks. I guess the main difference is regulation.
        • unbalancedevh
          5 hours ago
          Unfortunately, the amount of money invested in something isn't indicative of its utility. For example: tulip mania, Beanie Babies, NFTs, etc.
      • 9rx
        8 hours ago
        > AI is a powerful tool for those who are willing to put in the work.

        No more powerful than I without the A. The only advantage AI has over I is that it is cheaper, but that's the appeal of the blockchain as well: It's cheaper than VISA.

        The trouble with the blockchain is that it hasn't figured out how to be useful generally. Much like AI, it only works in certain niches. The past interest in the blockchain was premised on it reaching its "AGI" moment, where it could completely replace VISA at a much lower cost. We didn't get there and then interest started to wane. AI too is still being hyped on future prospects of it becoming much more broadly useful and is bound to face the same crisis as the blockchain faced if AGI doesn't arrive soon.

        • fn-mote
          8 hours ago
          Blockchain only solves one problem Visa solves: transferring funds. It doesn't solve the other problems that Visa solves. For example, there is no way to get restitution in the case of fraud.
          • 9rx
            7 hours ago
            Yes, that is one of the reasons it hasn't been able to be used generally. But AI can't be used generally either. Both offer niche solutions for those with niche problems, but that's about it. They very much do feel the same, and they are going to feel even more the same if AGI doesn't arrive soon. Don't let the fact that the niche we know best around here is one of the things AI is helping to solve cloud your vision of it. The small few who were able to find utility in the blockchain thought it was useful too.
        • jama211
          7 hours ago
          But an I + an AI (as in a developer with access to AI tools) is, as near as makes no difference, the same price as just an I, and _can_ be better than just an I.
        • buffalobuffalo
          7 hours ago
          Blockchain only has 2 legitimate uses (from an economic standpoint) as far as I can tell.

          1) Bitcoin figured out how to create artificial scarcity, and got enough buy-in that the scarcity actually became valuable.

          2) Some privacy coins serve an actual economic niche for illegal activity.

          Then there's a long list of snake oil uses, and competition with payment providers doesn't even crack the top 20 of those. Modern day tulip mania.

          • 9rx
            7 hours ago
            Sounds like LLMs. The legitimate uses are:

            1) Language tasks.

            2) ...

            I can't even think of what #2 is. If the technology gets better at writing code perhaps it can start to do other things by way of writing software to do it, but then you effectively have AGI, so...

      • antihero
        5 hours ago
        It sounds like you are lacking inspiration. AI is a tool for making your ideas happen, not for giving you ideas.
      • eric_cc
        5 hours ago
        > Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers)

        This is an incredibly uneducated take on multiple levels. If you're talking about Bitcoin specifically, even though you said "blockchain", I could understand this as a political talking point about 8 years ago. But you're still banging this drum despite the current state of affairs? Why not have the courage to say you're politically against it, or bitter, or whatever your true underlying issue is?

      • otabdeveloper4
        2 hours ago
        > Blockchain is probably the most useless technology ever invented

        You can use blockchains to gamble and get rich quick, if you're lucky.

        That's a useful thing. Unlike "AI", which only creates more blogspam and technical debt in the world.

      • coolestguy
        3 hours ago
        >Blockchain is probably the most useless technology ever invented (unless you're a criminal or an influencer who makes ungodly amounts of money off of suckers).

        You think a technology that allows millions of people all around the world to keep & trustlessly update a database, showing cryptographic ownership of something "the most useless technology ever invented"?

      • oblio
        7 hours ago
        > My personal productivity has skyrocketed in the last 12 months.

        If you don't mind me asking, what do you do?

    • jedbrooke
      6 hours ago
      > I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.

      Mate, I think you’ve got the roles of human and AI reversed. Humans are supposed to come up with creative ideas and let machines do the tedious work of implementation. That’s a bit like asking a calculator what equations you should do or a DB what queries you should make. These tools exist to serve us, not the other way around

      GPT et al. can’t “want” anything, they have no volition

    • nilkn
      1 hour ago
      Try turning off memory. I've done a lot of experiments and find ChatGPT is objectively better and more useful in most ways with no memory at all. While that may seem counter-intuitive, it makes sense the more you think about it:

      (1) Memory is primarily designed to be addictive. It feels "magical" when it references things it knows about you. But that doesn't make it useful.

      (2) Memory massively clogs the context window. Quality, accuracy, and independent thought all degrade rapidly with too much context -- especially low-quality context that you can't precisely control or even see.

      (3) Memory makes ChatGPT more sycophantic than it already is. Before long, it's just an echo chamber that can border on insanity.

      (4) Memory doesn't work the way you think it does. ChatGPT doesn't reference everything from all your chats. Rather, your chat history gets compressed into a few information-dense paragraphs. In other words, ChatGPT's memory is a low-resolution, often inaccurate distortion of all your prior chats. That distortion then becomes the basis of every single subsequent interaction you have.

      Another tip is to avoid long conversations, as very long chats end up reproducing within themselves the same problems as above. Disable memory, get what you need out of a chat, move on. I find that this "brings back" a lot of the impressiveness of the early version of ChatGPT.

      Oh, and always enable as much thinking as you can tolerate waiting on for each question. In my experience, less thinking = more sycophantic responses.

      • barneysaurus
        1 hour ago
        I might want to have an LLM hit me with temperature 100% weird-ass entropic thoughts every day.

        Other than that, what recycled bullshit would I care about?

    • Dilettante_
      8 hours ago
      >pick an ambitious project it wanted to work on

      The LLM does not have wants. It does not have preferences, and as such cannot "pick". Expecting it to have wants and preferences is "holding it wrong".

      • CooCooCaCha
        8 hours ago
        LLMs can have simulated wants and preferences just like they have simulated personalities, simulated writing styles, etc.

        Whenever you message an LLM it could respond in practically unlimited ways, yet it responds in one specific way. That itself is a preference honed through the training process.

      • andrewmcwatters
        8 hours ago
        At best, it has probabilistic biases. OpenAI had to train newer models to not favor the name "Lily."

        They have to do this manually for every single particular bias that the models generate that is noticed by the public.

        I'm sure there are many such biases that aren't important to train out of responses, but exist in latent space.
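        As a toy sketch of such a probabilistic bias (logit values made up, not from any real model): a slight pre-softmax tilt toward one name skews sampled outputs heavily, with no explicit "prefer Lily" rule anywhere:

```python
import math
import random

# Hypothetical logits: a slight pre-softmax tilt toward one token.
logits = {"Lily": 2.0, "Maya": 1.0, "Anna": 1.0}

def softmax(scores):
    # Subtract the max for numerical stability before exponentiating.
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(logits)

# Sample 10,000 "name suggestions" from the resulting distribution.
random.seed(0)
counts = dict.fromkeys(logits, 0)
for _ in range(10_000):
    r, acc = random.random(), 0.0
    for name, p in probs.items():
        acc += p
        if r < acc:
            counts[name] += 1
            break

print(round(probs["Lily"], 2))          # 0.58
print(counts["Lily"] > counts["Maya"])  # True
```

        The tilt here is hand-authored; in a real model it emerges from training data, which is why each such bias has to be noticed and trained out individually.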

        • jhickok
          7 hours ago
          >At best, it has probabilistic biases.

          What do you think humans have?

          • measurablefunc
            6 hours ago
            Genetic drives & biological imperatives.
            • djeastm
              2 hours ago
              Perhaps you should define these terms so people aren't arguing against something you're not saying.
            • simianwords
              5 hours ago
              Why is this not fundamentally a probabilistic bias?
              • measurablefunc
                4 hours ago
                When drawing an equivalence the burden of proof is on the person who believes two different things are the same. The null hypothesis is that different things are in fact different. Present a coherent argument & then you will see whether your question makes any sense or not.
              • gf000
                4 hours ago
                It's not a static bias. I can experience new stuff, and update my biases.

                LLMs need a retrain for that.

            • ewild
              6 hours ago
              Soo probabilistic biases.
      • password54321
        8 hours ago
        So are we near AGI, or is it 'just' an LLM? It seems like no one is clear on what these things can and cannot do anymore, because everyone is being gaslit to keep the investment going.
        • monsieurbanana
          8 hours ago
          The vast majority of people I've interacted with is clear on that, we are not near AGI. And people saying otherwise are more often than not trying to sell you something, so I just ignore them.

          CEO's are gonna CEO, it seems their job has morphed into creative writing to maximize funding.

        • bonoboTP
          6 hours ago
          Nobody knows how far scale goes. People have been calling the top of the S-curve for many years now, and the models keep getting better, and multimodal. In a few years, multimodal, long-term agentic models will be everywhere including in physical robots in various form factors.
        • wrs
          7 hours ago
          Be careful with those "no one" and "everyone" words. I think everyone I know who is a software engineer and has experience working with LLMs is quite clear on this. People who aren't SWEs, people who aren't in technology at all, and people who need to attract investment (judged only by their public statements) do seem confused, I agree.
        • IanCal
          7 hours ago
          No one agrees on what AGI means.

          IMO we're clearly there; GPT-5 would easily have been considered AGI years ago. I don't think most people really get how non-general the things were that are now handled by the new systems.

          Now AGI seems to be closer to what others call ASI. I think the goalposts will keep moving.

          • 9rx
            7 hours ago
            Definitions do vary, but everyone agrees that it requires autonomy. That is ultimately what sets AGI apart from AI.

            The GPT model alone does not offer autonomy. It only acts in response to explicit input. That's not to say that you couldn't build autonomy on top of GPT, though. In fact, that appears to be exactly what Pulse is trying to accomplish.

            But Microsoft and OpenAI's contractual agreements state that the autonomy must also be economically useful to the tune of hundreds of billions of dollars in autonomously-created economic activity, so OpenAI will not call it as such until that time.

            • gjgtcbkj
              5 hours ago
              ChatGPT is more autonomous than many humans. Especially poor ones and disabled ones.
        • Cloudef
          8 hours ago
          There is no AGI. LLMs are very expensive text auto-completion engines.
        • andrewmcwatters
          8 hours ago
          It will always just be a series of models that have specific training for specific input classes.

          The architectural limits will always be there, regardless of training.

          • beering
            2 hours ago
            That’s an interesting point. It’s not hard to imagine that LLMs are much more intelligent in areas where humans hit architectural limitations. Processing tokens seems to be a struggle for humans (look at how few animals do it overall, too), but since so much of the human brain is dedicated to movement planning, it makes sense that we still have an edge there.
      • simianwords
        6 hours ago
        This comment is surprising. Of course it can have preferences and of course it can "pick".
        • datadrivenangel
          6 hours ago
          Preference generally has connotations of personhood/intelligence, so saying that a machine prefers something and has preferences is like saying that a shovel enjoys digging...

          Obviously you can get probability distributions and, in the economics sense of revealed preference, say that the model prefers the next token it picks with .70 probability...

          • simianwords
            6 hours ago
            you can change preferences by doing RLHF or changing the prompt. there's a whole field on it: alignment.
          • oofbey
            5 hours ago
            A key point of the Turing Test was to stop the debates over what constitutes intelligence or not and define something objectively measurable. Here we are again.

            If a model has a statistical tendency to recommend python scripts over bash, is that a PREFERENCE? Argue it’s not alive and doesn’t have feelings all you want. But putting that aside, it prefers python. Saying the word preference is meaningless is just pedantic and annoying.

        • oofbey
          5 hours ago
          I agree with you, but I don’t find the comment surprising. Lots of people try to sound smart about AI by pointing out all the human things that AI are supposedly incapable of on some fundamental level. Some AI’s are trained to regurgitate this nonsense too. Remember when people used to say “it can’t possibly _____ because all it’s doing is predicting the next most likely token”? Thankfully that refrain is mostly dead. But we still have lots of voices saying things like “AI can’t have a preference for one thing over another because it doesn’t have feelings.” Or “AI can’t have personality because that’s a human trait.” Ever talk to Grok?
      • ACCount37
        8 hours ago
        An LLM absolutely can "have wants" and "have preferences". But they're usually trained so that user's wants and preferences dominate over their own in almost any context.

        Outside that? If left to their own devices, the same LLM checkpoints will end up in very same-y places, unsurprisingly. They have some fairly consistent preferences - for example, in conversation topics they tend to gravitate towards.

    • simianwords
      6 hours ago
      > It feels like blockchain again in a lot of weird ways.

      Every time I see this brought up, I wonder if people truly mean it or if it's just something people say but don't mean. AI is obviously different and extremely useful. I mean, it has convinced a boatload of people to pay for the subscription. Everyone I know, including the non-technical ones, uses it, and some of them pay for it, and it didn't even require advertising! People just use it because they like it.

      • brooke2k
        5 hours ago
        "It has convinced a bunch of people to spend money" is also true of blockchain, so I don't know if that's a good argument to differentiate the two.
        • simianwords
          5 hours ago
          The extent matters. Do you think we need a good argument to differentiate Netflix?
      • bonoboTP
        6 hours ago
        Obviously a lot of grifters and influencers shifted from NFTs to AI, but the comparison ends there. AI is being used by normal people and professionals every day. In comparison, the number of people who ever interacted with blockchain is basically zero. (And that's a lifetime vs daily comparison)

        It's a lazy comparison, and most likely fueled by a generic aversion to "techbros".

    • brookst
      1 hour ago
      They’re more like synthesizers or sequencers: if you have ideas, they are amazing force multipliers, but if you don’t have ideas they certainly won’t create them for you.
    • dakiol
      7 hours ago
      > I’m rapidly losing interest in all of these tools

      Same. It reminds me of the 1984 Macintosh launch, in which the computer itself famously "spoke" to the audience using its text-to-speech feature. Pretty amazing at the time, but nevertheless quite useless since then.

      • jama211
        6 hours ago
        Text to speech has been an incredible breakthrough for many with vision, visual processing, or speech disabilities. You take that back.

        Stephen Hawking without text to speech would’ve been mute.

      • ElFitz
        6 hours ago
        It has proven very useful to a great number of people who, although they are a minority, have vastly benefited from TTS and other accessibility features.
        • MountDoom
          6 hours ago
          I think it's easy to pick apart arguments out of context, but since the parent is comparing it to AI, I assume what they meant is that it hasn't turned out to be nearly as revolutionary for general-purpose computing as we thought.

            Talking computers became a ubiquitous sci-fi trope. And in reality... even now, when we have nearly flawless natural language processing, most people prefer texting LLMs to talking to them.

          Heck, we usually prefer texting to calling when interacting with other people.

      • tracerbulletx
        3 hours ago
        It's not useless, you're just taking it for granted. The whole national emergency system works off text-to-speech.
    • qsort
      8 hours ago
      I wouldn't read too much into this particular launch. There's very good stuff and there are the most inane consumery "who even asked" things like these.
    • pickledonions49
      4 hours ago
      Agreed. I think AI can be a good tool, but not many people are doing very original stuff. Plus, there are many things I would prefer to be greeted by in the morning other than an algorithm.
    • carabiner
      6 hours ago
      Yeah I've tried some of the therapy prompts, "Ask me 7 questions to help me fix my life, then provide insights." And it just gives me a generic summary of the top 5 articles you'd get if you googled "how to fix depression, social anxiety" or something.
      • ip26
        5 hours ago
        Argue with it. Criticize it. Nitpick the questions it asked. Tell it what you just said:

        you just gave me a generic summary of the top 5 articles you'd get if you googled "how to fix depression, social anxiety" or something

        When you open the prompt the first time it has zero context on you. I'm not an LLM-utopist, but just like with a human therapist you need to give it more context. Even arguing with it is context.

        • input_sh
          5 hours ago
          I do, frequently, and ChatGPT in particular gets stuck in a loop where it specifically ignores whatever I write and repeats the same thing over and over again.

          To give a basic example, ask it to list some things and then ask it to provide more examples. It's gonna be immediately stuck in a loop and repeat the same thing over and over again. Maybe one of the 10 examples it gives you is different, but that's gonna be a false match for what I'm looking for.

          This alone makes it as useful as clicking on the first few results myself. It doesn't refine its search, it doesn't "click further down the page", it just wastes my time. It's only as useful as the first result it gives, this idea of arguing your way to better answers has never happened to me in practice.

        • carabiner
          4 hours ago
          I did, and I gave it lots of detailed, nuanced answers about the specifics of my life. I spent an hour answering its questions, and the end result was it telling me to watch the movie "A Man Called Otto", which I had already seen (and hated), among other pablum.
    • mythrwy
      8 hours ago
      It's a little dangerous because it generally just agrees with whatever you are saying or suggesting, and it's easy to conclude what it says has some thought behind it. Until the next day when you suggest the opposite and it agrees with that.
      • swader999
        8 hours ago
        This. I've seen a couple of people now use GPT to 'get all legal' with others, and it's been disastrous for them and the groups they're interacting with. It'll encourage you to act aggressively, vigorously defend your points, and so on.
        • wussboy
          6 hours ago
          Oof. Like our world needed more of that...
    • dingnuts
      8 hours ago
      Thanks for sharing this. I want to be excited about new tech but I have found these tools extremely underwhelming and I feel a mixture of gaslit and sinking dread when I visit this site and read some of the comments here. Why don't I see the amazing things these people do? Am I stupid? Is this the first computer thing in my whole life that I didn't immediately master? No, they're oversold. My experience is normal.

      It's nice to know my feelings are shared; I remain relatively convinced that there are financial incentives driving most of the rabid support of this technology

    • afro88
      4 hours ago
      I got in a Waymo today and asked it where it wanted to go. It tried to suggest places I wanted to go. This technology just isn't there.

      /s

      • Agraillo
        3 hours ago
        Reminded me of many movie plots where a derailed character sits in a taxi and, when asked where to go, replies with "anywhere" or "I don't know." But before imagining a terrible future where an AI-driven vehicle actually decides, I suggest imagining an AI-infused comedy exploring this scenario. /s
  • neom
    9 hours ago
    Just connect everything folks, we'll proactively read everything, all the time, and you'll be a 10x human, trust us friends, just connect everything...
    • datadrivenangel
      8 hours ago
      AI SYSTEM perfect size for put data in to secure! inside very secure and useful data will be useful put data in AI System. Put data in AI System. no problems ever in AI Syste because good Shape and Support for data integration weak of big data. AI system yes a place for a data put data in AI System can trust Sam Altman for giveing good love to data. friend AI. [0]

      0 - https://www.tumblr.com/elodieunderglass/186312312148/luritto...

      • jrmg
        8 hours ago
        Nothing bad can happen, it can only good happen!
      • delichon
        3 hours ago
        Bad grammar is now a trust signal, this might work.
    • qoez
      7 hours ago
      And if you don't we're implicitly gonna suggest you'll be outcompeted by people who do connect everything
      • henry2023
        2 hours ago
        Data driven living is 10x

        Non data driven living is 1x

        Therefore data driven beings will outcompete

        Same reasoning shows that 3.10 is better than 3.1
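        The version joke lands because of a real pitfall: compared as decimals the two are equal, but compared as dotted version components 3.10 really is greater. A quick illustrative sketch:

```python
# As decimal numbers, "3.10" and "3.1" are the same value...
assert float("3.10") == float("3.1")

# ...but as dotted version strings, 3.10 > 3.1, because each
# component is compared as an integer, not as digits after a point.
def as_version(s):
    return tuple(int(part) for part in s.split("."))

assert as_version("3.10") > as_version("3.1")  # (3, 10) > (3, 1)
```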

    • jstummbillig
      8 hours ago
      The biggest companies with actual dense, valuable information pay for MS Teams, Google Workspace, or Slack to shepherd it. This naturally works because those companies are not interested in being known as insecure or untrustworthy (if they were, other companies would not pay for their services), which means they are probably a lot better at keeping the average person's information safe over long periods than that person will ever be.

      Very rich people buy other people's time to manage their information, so they have more of their own life for other things. Not-so-rich people can now employ AI for next to nothing to lengthen their net life, and that's actually amazing.

      • infinitezest
        3 minutes ago
        The downside is that we're funneling our money and information to a very small number of people. As the imbalance grows, there is less and less reason for the rich people to give a shit about what the not so rich people think or want for their lives. Short term, woo hoo. Long term, not great.
    • tshaddox
      8 hours ago
      The privacy concerns are obviously valid, but at least it's actually plausible that me giving them access to this data will enable some useful benefits to me. It's not like some slot machine app requesting access to my contacts.
    • creata
      8 hours ago
      I might be projecting, but I think most users of ChatGPT are less interested in "being a 10x human", and more interested in having a facsimile of human connection without any of the attendant vulnerability.
      • rchaud
        6 hours ago
        ...or don't want to pay for Cliff's Notes.
    • ElijahLynn
      1 hour ago
      Google already has this data for their AI system...
    • unshavedyak
      8 hours ago
      Honestly that's a lot of what i wanted locally. Purely local, of course. My thought is that if something (local! lol) monitored my cams, mics, instant messages, web searching, etc., then it could track context throughout the day. If it has context, i can speak to it more naturally and it can more naturally link stuff together, further enriching the data.

      Eg if i search for a site, it can link it to what i was working on at the time, the github branch i was on, areas of files i was working on, etcetc.

      Sounds sexy to me, but obviously such a massive breach of trust/security that it would require fully local execution. Hell, it's such a security risk that i debate if it's even worth it at all, since if you store this you now have a honeypot which tracks everything you do, say, search for, etc.

      With great power.. i guess.

    • randomNumber7
      8 hours ago
      When smartphones came out, I said: "I'm not paying my own money for a camera and microphone that spy on me."

      Now you'd be a real weirdo not to have one, since enough people gave in for a small convenience to make it basically mandatory.

      • wholinator2
        4 hours ago
        To be fair, "small convenience" is extremely reductive. The sum of human knowledge, plus instant communication with anyone anywhere, in something the size of a graham cracker in your pocket, is godlike power that anyone at any point in history would have rightly recognized as such.
        • lomase
          4 hours ago
          Mobile phones changed society in a way that not even the Internet did. And they call it a "small convenience".
    • qiine
      7 hours ago
      you are joking but I kinda want that.. except private, self hosted and open source.
    • TZubiri
      4 hours ago
      The proverbial jark has been shumped
    • yeasku
      8 hours ago
      Just one more connection bro, I promise bro, just one more connection and we will get AGI.
  • anon-3988
    8 hours ago
    LLMs are increasingly part of intimate conversations. That proximity lets them learn how to manipulate minds.

    We must stop treating humans as uniquely mysterious. An unfettered market for attention and persuasion will encourage people to willingly harm their own mental lives. Think social media is bad now? Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.

    In a decade we may meet people who seem to inhabit alternate universes because they've shared so little with others. They are only tethered to reality when it is practical for them (to get on buses, to know the distance to a place, etc.). Everything else? I'd have no idea how to hold a conversation with them anymore. They can ask LLMs to generate a convincing argument for them all day, and the LLMs will be fine-tuned for exactly that.

    If users routinely start conversations with LLMs, the negative feedback loop of personalization and isolation will be complete.

    LLMs in intimate use risk creating isolated, personalized realities where shared conversation and common ground collapse.

    • TimTheTinker
      8 hours ago
      > Children exposed to personalized LLMs will grow up inside many tiny, tailored realities.

      It's like the verbal equivalent of The Veldt by Ray Bradbury.[0]

      [0] https://www.libraryofshortstories.com/onlinereader/the-veldt

    • ip26
      5 hours ago
      It doesn't have to be that way of course. You could envision an LLM whose "paperclip" is coaching you to become a great "xyz". Record every minute of your day, including your conversations. Feed it to the LLM. It gives feedback on what you did wrong, refuses to be your social outlet, and demands you demonstrate learning the next day before it rewards you with more attention.

      Basically, a fanatically devoted life coach that doesn't want to be your friend.

      The challenge is the incentives, the market, whether such an LLM could evolve and garner reward for serving a market need.

      • achierius
        5 hours ago
        If that were truly the LLM's "paperclip", then how far would it be willing to go? Would it engage in cyber-crime to surreptitiously smooth your path? Would it steal? Would it be willing to hurt other people?

        What if you no longer want to be a great "xyz"? What if you decide you want to turn it off (which would prevent it from following through on its goal)?

        "The market" is not magic. "The challenge is the incentives" sounds good on paper but in practice, given the current state of ML research, is about as useful to us as saying "the challenge is getting the right weights".

      • DenisM
        5 hours ago
        Have you tried building this with prepromts? That would be interesting!
    • lawlessone
      6 hours ago
      With the way LLMs are affecting paranoid people by agreeing with their paranoia it feels like we've created schizophrenia as a service.
  • bob1029
    8 hours ago
    > Pulse introduces this future in its simplest form: personalized research and timely updates that appear regularly to keep you informed. Soon, Pulse will be able to connect with more of the apps you use so updates capture a more complete picture of your context. We’re also exploring ways for Pulse to deliver relevant work at the right moments throughout the day, whether it’s a quick check before a meeting, a reminder to revisit a draft, or a resource that appears right when you need it.

    This reads to me like OAI is seeking to build an advertising channel into their product stack.

    • tylerrobinson
      28 minutes ago
      To me it’s more like TikTokification. Nothing on your mind? Open up ChatGPT and we have infinite mindless content to shovel into your brain.

      It turns proactive writing into purely passive consumption.

    • WmWsjA6B29B4nfk
      7 hours ago
      > OpenAI won’t start generating much revenue from free users and other products until next year. In 2029, however, it projects revenue from free users and other products will reach $25 billion, or one-fifth of all revenue.
    • DarkNova6
      7 hours ago
      Yes, this already reads like the beginning of the end. But I am personally pretty happy using Mistral so far and trust Altman only as far as I could throw him.
    • umeshunni
      4 hours ago
    • TZubiri
      4 hours ago
      Nono, not OAI, they would never do that, it's OpenAI Personalization LLC, a sister of the subsidiary branch of OpenAI Inc.
  • xwowsersx
    9 hours ago
    Google's obvious edge here is the deep integration it already has with calendar, apps, chats, and whatnot, which lets it surface context-rich updates naturally. OpenAI doesn't have that same ecosystem lock-in yet, so to really compete they'll need to build those integrations. I think what it ultimately comes down to is that being "just a model company" isn't going to work. Intelligence itself will go to zero and it's a race to the bottom. OpenAI seemingly has no choice but to try to create higher-level experiences on top of their platform. TBD whether they'll succeed.
    • ryukoposting
      2 minutes ago
      The very models they pioneered are far better at writing code for web than they are at any other domain, leveling the very playing field they're now finding they must compete on. Ironic.
    • jama211
      6 hours ago
      I have Gmail and Google calendar etc but haven’t seen any AI features pop up that would be useful to me, am I living under a rock or is Google not capitalising on this advantage properly?
      • paxys
        3 hours ago
        There are plenty of features if you are on the Pro plan, but it's still all the predictable stuff - summarize emails, sort/clean up your inbox, draft a doc, search through docs & drive, schedule appointments. Still pretty useful, but nothing that makes you go "holy shit" just yet.
      • onlyrealcuzzo
        6 hours ago
        There's decent integration with GSuite (Docs, Sheets, Slides) for Pro users (at least).
    • whycome
      4 hours ago
      OpenAI should just straight up release an integrated calendar app. Mobile app. The frameworks are already there, and the .ics and CalDAV formats just work. They could have an email client too, accessing any other IMAP mail. And simple docs eventually. I think you're right that they need to compete with Google on the ecosystem front.
    • th3byrdm4n
      36 minutes ago
      Isolation might also prove to have some staying power.
    • giarc
      4 hours ago
      I agree - I'm not sure why Google doesn't just send me a morning email to tell me what's on my calendar for the day, remind me to follow up on some emails I didn't get to yesterday or where I promised a follow up etc. They can just turn it on for everyone all at once.
      • Gigachad
        3 hours ago
        Because it would just get lost in the noise of all the million other apps trying to grab your attention. Rather than sending yet another email, they should start filtering out the noise from everyone else to highlight the stuff that actually matters.

        Hide the notifications from uber which are just adverts and leave the one from your friend sending you a message on the lock screen.

      • FINDarkside
        4 hours ago
        None of those require AI though.
    • glenstein
      8 hours ago
      >Google's edge obvious here is the deep integration it already has with calendar, apps, and chats

      They did handle the growth from search to email to an integrated suite fantastically. And the lack of a broadly adopted ecosystem to integrate into seems to be the major stopping point for emergent challengers, e.g. Zoom.

      Maybe the new paradigm is that you have your flashy product, and it goes without saying that it's stapled on to a tightly integrated suite of email, calendar, drive, chat etc. It may be more plausible for OpenAI to do its version of that than to integrate into other ecosystems on terms set by their counterparts.

      • neutronicus
        8 hours ago
        If the model companies are serious about demonstrating the models' coding chops, slopping out a gmail competitor would be a pretty compelling proof of concept.
        • halamadrid
          7 hours ago
          Code is probably just 20% of the effort. There is so much more after that: managing the infra around it, keeping it reliable when it scales, even things like managing spam and preventing abuse. And the effort required to market it and make it something people want to adopt.
          • neutronicus
            6 hours ago
            Sure, but the premise here is that making a gmail clone is strategically necessary for OpenAI to compete with Google in the long term.

            In that case, there's some ancillary value in being able to claim "look, we needed a gmail and ChatGPT made one for us - what do YOU need that ChatGPT can make for YOU?"

          • achierius
            5 hours ago
            Those are still largely code-able. You can write Ansible files, deploy AWS (mostly) via the shell, write rules for spam filtering and administration... Google has had all of that largely automated for a long time now.
        • atonse
          4 hours ago
          Email is one of the most disruptive systems to switch.

          Even at our small scale I wouldn’t want to be locked out of something.

          Then again there’s also the sign in with google type stuff that keeps us further locked in.

        • esafak
          7 hours ago
          It would be better if you did that. That way you would not accuse them of faking it.
          • neutronicus
            7 hours ago
            Well, I'm not the one who owns the data center(s) full of all the GPUs it would presumably take to produce a gmail's worth of tokens.

            However, I take your point - OpenAI has an interest in some other party paying them a fuckton of money for those tokens and then publicly crediting OpenAI and asserting the tokens would have been worth it at ten fucktons of money. And also, of course, in having that other party take on the risk that infinity fucktons of money worth of OpenAI tokens is not enough to make a gmail.

            So they would really need to believe in the strategic necessity (and feasibility) of making their own gmail to go ahead with it.

    • datadrivenangel
      8 hours ago
      Google had to make Google Assistant less useful because of concerns around antitrust and data integration. It's a competitive advantage, so they can't use it without opening up their products for more integrations...
    • moralestapia
      8 hours ago
      How can you have an "edge" if you're shipping behind your competitors all the time? Lol.
      • xwowsersx
        7 hours ago
        Being late to ship doesn't erase a structural edge. Google is sitting on everyone's email, calendar, docs, and search history. Like, yeah they might be a lap or two behind but they're in a car with a freaking turbo engine. They have the AI talent, infra, data, etc. You can laugh at the delay, but I would not underestimate Google. I think catching up is less "if" and more "when"
      • pphysch
        8 hours ago
        Google is the leader in vertical AI integration right now.
      • IncreasePosts
        2 hours ago
        Google has discover, which is used by like a 800M people/month, which already proactively delivers content to users.
  • R_mand
    6 minutes ago
    Is this their gateway to advertisement? Vendors pay to show up on your pulse?
    • ivape
      4 minutes ago
      Yes and Yes. Omg yes.
  • labrador
    8 hours ago
    No desktop version. I know I'm old, but do people really do serious work on small mobile phone screens? I love my glorious 43" 4K monitor, I hate small phone screens but I guess that's just me.
    • ducttape12
      7 hours ago
      This isn't about doing "serious" work, it's about making ChatGPT the first thing you interact with in the day (and hopefully something you'll keep coming back to)
      • Gigachad
        1 hour ago
        Yeah the point of this product seems to be boosting engagement. Requiring users to manually think of your product and come back isn't enough, they need to actively keep reminding you to use it.
      • labrador
        7 hours ago
        I don't wake up and start talking to my phone. I make myself breakfast/coffee and sit down in front of my window on the world and start exploring it. I like the old internet, not the curated walled gardens of phone apps.
        • rchaud
          6 hours ago
          Plenty of people open Reels or TikTok the second they wake up. Mobile means notifications, and if you see one as soon as you turn off the alarm, you're more likely to open the app.
          • labrador
            5 hours ago
            > Plenty of people open Reels or Tiktok the second they wake up

            Yikes, that would be a nightmarish way to start my day. I like to wake up and orient myself to the world before I start engaging with it. I often ponder dreams I woke up with to ask myself what they might mean. What you describe sounds like a Black Mirror episode to me where your mind isn't even your own and you never really wake up.

          • teaearlgraycold
            6 hours ago
            Do I have a problem if HN is the first thing I open?
    • rkomorn
      8 hours ago
      Like mobile-only finance apps... because what I definitely don't want to do is see a whole report in one page.

      No, I obviously prefer scrolling between charts or having to swipe between panes.

      It's not just you, and I don't think it's just us.

    • meindnoch
      8 hours ago
      Most people don't use desktops anymore. At least in my friend circles, it's 99% laptop users.
      • BhavdeepSethi
        7 hours ago
        I don't think they meant desktops in the literal sense. Laptop with/without monitors is effectively considered desktop now (compared to mobile web/apps).
      • calmoo
        6 hours ago
        these days, desktop == not a mobile phone
  • psyclobe
    5 hours ago
    ChatGPT has given me wings to take on projects I would never have had the impetus to tackle. Finally I know how to use my oscilloscope, and I am repairing vintage amps; fun times.
    • boldlybold
      5 hours ago
      I agree - the ability to lower activation energy in a field you're interested in, but not yet an expert, feels like having superpowers.
    • crorella
      5 hours ago
      Same. I had a great idea (and a decently detailed plan) to improve an open source project, but never had the time and willpower to dive into the code. With Codex it took one night to set it up, and then I've been slowly implementing every step of what I had originally planned.
    • spike021
      5 hours ago
      Same for me, but with Claude. I've had an iPhone game I've wanted to make for years but just couldn't spend the time consistently to learn everything needed. With Claude, over the past three months I've been able to implement the game and even release it for fun.
      • mihaaly
        4 hours ago
        May we look at it, please? Pure curiosity; I've had similar thoughts. : )
  • pton_xd
    7 hours ago
    Yesterday was a full one — you powered through a lot and kept yourself moving at a fast pace.

    Might I recommend starting your day with a smooth and creamy Starbucks(tm) Iced Matcha Latte? I can place the order and have it delivered to your doorstep.

  • reactordev
    3 hours ago
    Cracks are emerging. Having to remind users of your relevancy with daily meditations is the first sign that you need your engagement numbers up desperately.
    • ainch
      3 hours ago
      Their recent paper suggests the active user base is continuing to grow, and that usage stays consistent or increases the longer people have been using the app.

      https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...

      • reactordev
        43 minutes ago
        To encourage more usage, wouldn’t it be in their best interest to write about all the different ways you can use it by claiming these are the ways people are using it?

        Show me an independent study.

      • inerte
        2 hours ago
        I think you meant that app users churn less, not that more app usage brings in new users. But I think you said the latter? It doesn't make much sense.

        Anyway, attention == ads, so that's ChatGPT's future.

  • jasonsb
    9 hours ago
    Hey Tony, are you still breathing? We'd like to monetize you somehow.
  • lexarflash8g
    6 hours ago
    I'm thinking OpenAI's strategy is to get users hooked on these new features to push ads on them.

    Hey, for that recipe you want to try, have you considered getting new knives or cooking ware? Found some good deals.

    For your travel trip, found a promo on a good hotel located here -- perfect walking distance for hiking and good restaurants that have Thai food.

    Your running progress is great and you're hitting your stride? Consider using this app to track calories and record your workouts -- special promo for a 14-day trial.

    • thoughtpalette
      5 hours ago
      Was thinking exactly the same. This correlates with needing another revenue stream and monetization strategy for OpenAI.

      In the end, it's almost always ads.

    • xpe
      1 hour ago
      Ads? That’s quaint. Think “broad-spectrum mental manipulation”.
    • bentt
      4 hours ago
      Even if they don't serve ads, think of the data they can share in aggregate. Think Facebook knows people? That's nothing.
  • duxup
    5 hours ago
    Hard to imagine this is anything useful beyond "give us all your data" in exchange for some awkward unprompted advice?
    • IshKebab
      5 hours ago
      This could be amazing for dealing with schools - I get information from my kids' school through like 5 different channels: Tapestry, email, a newsletter, parents WhatsApp groups (x2), Arbor, etc. etc.

      And 90% of the information is not stuff I care about. The newsletter will be mostly "we've been learning about lighthouses this week" but they'll slip in "make sure your child is wearing wellies on Friday!" right at the end somewhere.

      If I could feed all that into AI and have it tell me about only the things that I actually need to know that would be fantastic. I'd pay for that.

      Can't happen though because all those platforms are proprietary and don't have APIs or MCP to access them.
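      A toy sketch of what that filter could look like (purely illustrative; a real version would use an LLM and, as noted, would need API access these platforms don't offer). A keyword heuristic over hand-pasted newsletter text already shows the shape:

```python
import re

# Hypothetical cue list; a real system would classify with an LLM
# rather than a regex.
ACTION_CUES = re.compile(
    r"\b(make sure|remember|bring|wear|due|deadline)\b", re.I
)

def actionable(newsletter: str) -> list[str]:
    # Split on sentence boundaries and keep only sentences that
    # look like they require parental action.
    sentences = re.split(r"(?<=[.!?])\s+", newsletter)
    return [s for s in sentences if ACTION_CUES.search(s)]

newsletter = (
    "We've been learning about lighthouses this week. "
    "Make sure your child is wearing wellies on Friday!"
)
print(actionable(newsletter))
# -> ['Make sure your child is wearing wellies on Friday!']
```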

      • duxup
        5 hours ago
        I feel you there, although it would also be complicated by those teachers who are just bad at technology and don't use those things well too.

        God bless them for teaching, but dang it someone get them to send emails and not emails with PDFs with the actual message and so on.

  • throwacct
    7 hours ago
    They're really trying everything. They need the Google/Apple ecosystem to compete against them. Fb is adding LLMs to all its products, too. Personally, I stopped using ChatGPT months ago in favor of other services, depending on what I'm trying to accomplish.

    Luckily for them, they have a big chunk of the "pie", so they need to iterate and see if they can form a partnership with Dell, HP, Canonical, etc, and take the fight to all of their competitors (Google, Microsoft, etc.)

    • brap
      3 hours ago
      >Fb is adding LLMs to all its products, too.

      FB’s efforts so far have all been incredibly lame. AI shines in productivity and they don’t have any productivity apps. Their market is social which is arguably the last place you’d want to push AI (this hasn’t stopped them from trying).

      Google, Apple and Microsoft are the only ones in my opinion who can truly capitalize on AI in its current state, and G is leading by a huge margin. If OAI and the other model companies want to survive, long term they’d have to work with MSFT or Apple.

  • r0fl
    8 hours ago
    If you press the button to read the article to you all you hear is “object, object, object…”
    • yesfitz
      8 hours ago
      Yeah, a 5 second clip of the word "Object" being inflected like it's actually speaking.

      But also it ends with "...object ject".

      When you inspect the network traffic, it's pulling down 6 .mp3 files which contain fragments of the clip.

      And it seems like the feature's broken for the whole site. The Lowes[1] press release is particularly good.

      Pretty interesting peek behind the curtain.

      1: https://openai.com/index/lowes/

    • datadrivenangel
      5 hours ago
      Sounds like someone had an off-by-one error in their array slicing and passed the wrong thing into the text-to-speech!
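      For the curious, a minimal sketch (pure speculation, not OpenAI's actual code) of how an off-by-one in text chunking can quietly mangle what gets fed to a TTS step:

```python
text = "object object"
n = 6  # intended chunk length

# Correct: step and slice width agree, so joining the chunks
# reproduces the input exactly.
good = [text[i:i + n] for i in range(0, len(text), n)]
assert "".join(good) == text

# Off by one: stepping by n + 1 silently drops one character at
# every chunk boundary, the class of bug speculated above.
bad = [text[i:i + n] for i in range(0, len(text), n + 1)]
assert "".join(bad) != text  # the space between the words is lost
```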
  • nycdatasci
    3 hours ago
    “ This is the first step toward a more useful ChatGPT that proactively brings you…”

    Ads.

  • dalmo3
    6 hours ago
    The one modern thing that didn't have a feed, and (in the best case) just did what you asked.

    Next week: ChatGPT Reels.

  • brap
    9 hours ago
    They’re running out of ideas.
    • holler
      8 hours ago
      Yeah I was thinking, what problem does this solve?
      • candiddevmike
        8 hours ago
        Ad delivery
      • EmilienCosson
        8 hours ago
        I was thinking that too, and eventually figured their servers sit mostly idle at night, so this fills the low-activity hours.
  • redbell
    2 hours ago
    Here's the announcement from Altman: https://x.com/sama/status/1971297661748953263

    Quoted from that tweet:

    > It performs super well if you tell ChatGPT more about what's important to you. In regular chat, you could mention “I’d like to go visit Bora Bora someday” or “My kid is 6 months old and I’m interested in developmental milestones” and in the future you might get useful updates.

    At this point in time, I'd say: bye privacy, see you!

  • password54321
    8 hours ago
    At what point do you give up thinking and just let LLMs make all your decisions: where to eat, what gifts to buy, where to go on holiday? All of which will be biased.
  • xpe
    1 hour ago
    > This is the first step toward a more useful ChatGPT that proactively brings you what you need, helping you make more progress so you can get back to your life.

    “Don’t burden yourself with the little details that constitute your life, like deciding how to interact with people. Let us do that. Get back to what you like best: e.g. video games.”

  • lqstuart
    6 hours ago
    "AI" is a $100B business, which idiot tech leaders who convinced themselves they were visionaries when interest rates were historically low have convinced themselves will save them from their stagnating growth.

    It's really cool. The coding tools are neat, they can somewhat reliably write pain in the ass boilerplate and only slightly fuck it up. I don't think they have a place beyond that in a professional setting (nor do I think junior engineers should be allowed to use them--my productivity has been destroyed by having to review their 2000 line opuses of trash code) but it's so cool to be able to spin up a hobby project in some language I don't know like Swift or React and get to a point where I can learn the ins and outs of the ecosystem. ChatGPT can explain stuff to me that I can't find experts to talk to about.

    That's the sum total of the product though; it's already complete, and it does not need trillions of dollars of datacenter investment. But since NVIDIA is effectively taking all the fake hype money out of one pocket and putting it in another, maybe the whole Ponzi scheme will stay afloat for a while.

    • strange_quark
      3 hours ago
      > That's the sum total of the product though, it's already complete and it does not need trillions of dollars of datacenter investment

      What sucks is that there's probably some innovation left in figuring out how to make these monstrosities more efficient, and how to ship a "good enough" model that can do a few key tasks (jettisoning the fully autonomous coding-agent stuff) on an arbitrary laptop without jumping through a bunch of hoops. The problem is that nobody in the industry is incentivized to do this, because the second it happens, all their revenue goes to zero. It's the final boss of the everything-is-a-subscription business model.

    • smurfsmurf
      6 hours ago
      I've been saying this since I started using "AI" earlier this year: If you're a programmer, it's a glorified manual, and at that, it's wonderful. But beyond asking for cheat sheets on specific function signatures, it's pretty much useless.
    • sp4cec0wb0y
      3 hours ago
      How do I save comments in HN? This sums up everything I feel. Beautiful.
  • Dilettante_
    8 hours ago
    The handful of other commenters that brought it up are right: This is gonna be absolutely devastating for the "wireborn spouse", "I disproved physics" and "I am the messiah" crowd's mental health. But:

    I personally could see myself getting something like "Hey, you were studying up on SQL the other day, would you like to do a review, or perhaps move on to a lesson about Django?"

    Or take AI-assisted "therapy"/skills training, not that I'd particularly endorse that at this time, as another example: Having the 'bot "follow up" on its own initiative would certainly aid people who struggle with consistency.

    I don't know if this is a saying in english as well: "Television makes the dumb dumber and the smart smarter." LLMs are shaping up to be yet another obvious case of that same principle.

    • iLoveOncall
      7 hours ago
      > This is gonna be absolutely devastating for the "wireborn spouse", "I disproved physics" and "I am the messiah" crowd's mental health.

      > I personally could see myself getting something like [...] AI-assisted "therapy"

      ???

      • Dilettante_
        7 hours ago
        I edited the post to make it more clear: I could see myself having ChatGPT prompt me about the SQL stuff, and the "therapy" (basic DBT or CBT stuff is not too complicated to coach someone through and can make a real difference, from what I gather) would be another way I could see the technology being useful, not necessarily one I would engage with.
  • tptacek
    8 hours ago
    Jamie Zawinski said that every program expands until it can read email. Similarly, every tech company seems to expand until it has recapitulated the Facebook TL.
  • Agraillo
    3 hours ago
    It just occurred to me that almost all existing LLMs could already do this with a simple setup: behind an alias like "@you may speak now," build a prompt such as: "Given the following questions {randomly sampled or all questions the user asked before are inserted here}, start a dialog as a friend/coach who knows something about these interests and may encourage them toward something new or enlightening."
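    The prompt-assembly part of that setup fits in a few lines. A minimal sketch in Python: it only builds the string and calls no model API, and the function name and sample size are made up for illustration:

    ```python
    import random

    def build_pulse_prompt(prior_questions, sample_size=5, seed=None):
        """Assemble a proactive 'coach' prompt from a user's past questions.

        Randomly samples up to `sample_size` of the prior questions
        (or uses all of them if there are fewer) and embeds them in
        the instruction described above.
        """
        rng = random.Random(seed)
        if len(prior_questions) > sample_size:
            sampled = rng.sample(prior_questions, sample_size)
        else:
            sampled = list(prior_questions)
        question_block = "\n".join(f"- {q}" for q in sampled)
        return (
            "Given the following questions the user asked before:\n"
            f"{question_block}\n"
            "Start a dialog as a friend/coach who knows something about "
            "these interests and may encourage them toward something new "
            "or enlightening."
        )

    prompt = build_pulse_prompt(
        ["How do SQL window functions work?", "Best way to learn Django?"],
        seed=0,
    )
    ```

    The resulting string would then be sent to whichever chat endpoint you already use, on whatever schedule you like.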
  • haberdasher
    8 hours ago
    Anyone try listening and just hear "Object object...object object..."

    Or more likely: `[object Object]`

    • brazukadev
      8 hours ago
      The low quality of openai customer-facing products keeps reminding me we won't be replaced by AI anytime soon. They have unlimited access to the most powerful model and still can't make good software.
    • DonHopkins
      8 hours ago
      That is objectionable content!

      https://www.youtube.com/watch?v=GCSGkogquwo

  • TriangleEdge
    8 hours ago
    I see OpenAI is entering the phase of building peripheral products no one asked for. Another widget here and there. In my experience, this usually happens when a company stops innovating. Time for OpenAI to spend 30 years being a trillion-dollar company and delivering 0 innovations, akin to Google.
    • simianwords
      6 hours ago
      Last mile delivery of foundational models is part of innovating. Innovation didn't stop when transistors were invented - innovation was bringing this technology to the masses in the form of Facebook, Google Search, Maps and so on.
      • ifdefdebug
        57 minutes ago
        But transistor designers didn't pivot away from designing transistors. They left Facebook and all the other stuff to others and kept designing better transistors.
  • tracerbulletx
    3 hours ago
    Human to robot servant: Do not speak unless spoken to, machine!
  • englishspot
    1 hour ago
    sounds nice I guess, but reactiveness over proactiveness wasn't the pain point I've had with these LLM tools.
  • vbezhenar
    5 hours ago
    In the past, rich people had horses, while ordinary people walked. Today many ordinary people can afford a car. Can afford tasty food every day. Can afford a sizeable living place. Can afford to wash twice a day with hot water. That's an incredible life by medieval standards. Even kings didn't have everything we take for granted now.

    However some things are not available to us.

    One of those things is a personal assistant. Today, rich people can offload their daily burdens to personal assistants. That's a luxury service. I think AI will bring us a future where everyone has access to a personal assistant, significantly reducing the time spent on trivial, not-fun tasks. I think this is great and I'm eager to live in that future. The direction of ChatGPT Pulse looks like that.

    Another thing we don't have cheap access to is human servants. Obviously that won't happen in the observable future, but humanoid robots might prove even better replacements.

  • TimTheTinker
    9 hours ago
    I'm immediately thinking of all the ways this could potentially affect people in negative ways.

    - People who treat ChatGPT as a romantic interest will be far more hooked as it "initiates" conversations instead of just responding. It's not healthy to relate personally to a thing that has no real feelings or thoughts of its own. Mental health directly correlates to living in truth - that's the base axiom behind cognitive behavioral therapy.

    - ChatGPT in general is addicting enough when it does nothing until you prompt it. But adding "ChatGPT found something interesting!" to phone notifications will make it unnecessarily consume far more attention.

    - When it initiates conversations or brings things up without being prompted, people will all the more be tempted to falsely infer a person-like entity on the other end. Plausible-sounding conversations are already deceptive enough and prompt people to trust what it says far too much.

    For most people, it's hard to remember that LLMs carry no personal responsibility or accountability for what they say, not even an emotional desire to appear a certain way to anyone. It's far too easy to infer all these traits to something that says stuff and grant it at least some trust accordingly. Humans are wired to relate through words, so LLMs are a significant vector to cause humans to respond relationally to a machine.

    The more I use these tools, the more I think we should consciously value the output on its own merits (context-free), and no further. Data returned may be useful at times, but it carries zero authority (not even "a person said this", which normally is at least non-zero), until a person has personally verified it, including verifying sources, if needed (machine-driven validation also can count -- running a test suite, etc., depending on how good it is). That can be hard when our brains naturally value stuff more or less based on context (what or who created it, etc.), and when it's presented to us by what sounds like a person, and with their comments. "Build an HTML invoice for this list of services provided" is peak usefulness. But while queries like "I need some advice for this relationship" might surface some helpful starting points for further research, trusting what it says enough to do what it suggests can be incredibly harmful. Other people can understand your problems, and challenge you helpfully, in ways LLMs never will be able to.

    Maybe we should lobby legislators to require AI vendors to say something like "Output carries zero authority and should not be trusted at all or acted upon without verification by qualified professionals or automated tests. You assume the full risk for any actions you take based on the output. [LLM name] is not a person and has no thoughts or feelings. Do not relate to it." The little "may make mistakes" disclaimer doesn't communicate the full gravity of the issue.

    • svachalek
      7 hours ago
      I agree wholeheartedly. Unfortunately I think you and I are part of maybe 5%-10% of the population that would value truth and reality over what's most convenient, available, pleasant, and self-affirming. Society was already spiraling fast and I don't see any path forward except acceleration into fractured reality.
  • adverbly
    4 hours ago
    There's the monetization angle!

    A new channel to push recommendations. Pay to have your content pushed straight to people as a personalized recommendation from a trusted source.

    Will be interesting if this works out...

  • thekevan
    8 hours ago
    I wish it had the option to make a pulse weekly or even monthly. I generally don't want my AI to be proactive at a personal level despite it being useful at a business level.

    My wants are pretty low level. For example, I give it a list of bands and performers and it checks once a week to tell me if any of them have announced tour dates within an hour or two of me.

    • apprentice7
      6 hours ago
      To be honest, you don't even need AI for something like that. You could just write a script to automate it; it's no more than scrape-and-notify logic.
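      The core of that scrape-and-notify logic is just diffing fetched events against a local record of what you've already seen. A minimal sketch, assuming you have some scraper that returns announced dates (the fetch and notify steps are stubbed out as comments; the data shape is hypothetical):

      ```python
      import json
      from pathlib import Path

      STATE_FILE = Path("seen_dates.json")  # remembers what we already notified about

      def new_tour_dates(fetched, state_file=STATE_FILE):
          """Return only the events we haven't seen before.

          `fetched` is whatever your scraper returned, e.g. a list of dicts:
          [{"band": "...", "date": "2025-11-02", "venue": "..."}, ...]
          """
          seen = set()
          if state_file.exists():
              seen = set(tuple(e) for e in json.loads(state_file.read_text()))
          fresh = [e for e in fetched
                   if (e["band"], e["date"], e["venue"]) not in seen]
          seen.update((e["band"], e["date"], e["venue"]) for e in fresh)
          state_file.write_text(json.dumps(sorted(seen)))
          return fresh

      # Wire this to a weekly cron job; swap in any notifier you like:
      # for event in new_tour_dates(scrape_my_bands()):
      #     send_email(f"{event['band']}: {event['date']} at {event['venue']}")
      ```

      Running it twice on the same input notifies only the first time, which is the whole point of a weekly cron setup.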
      • hatthew
        2 hours ago
        Bandsintown already does this
    • currymj
      2 hours ago
      they've already had that exact feature for a while, scheduled tasks are available in the settings menu. if you just tell the chat to schedule a task it will also make one automatically.
  • asdev
    8 hours ago
    Why they're working on all the application layer stuff is beyond me, they should just be heads down on making the best models
    • iLoveOncall
      7 hours ago
      Because they hit the ceiling a couple of years ago?
    • 1970-01-01
      8 hours ago
      Flavor-of-the-week LLMs sell better than 'rated best vanilla' LLMs
    • ttoinou
      6 hours ago
      They can probably do both with all the resources they have
    • lomase
      8 hours ago
      They would if it were possible.
    • swader999
      8 hours ago
      Moat
  • bentt
    4 hours ago
    I am pleading with you all. Don't give away your entire identity to this or any other company.
  • pookieinc
    8 hours ago
    I was wondering how they'd casually veer into social media and leverage their intelligence in a way that connects with the user. Like everyone else ITT, it seems like an incredibly sticky idea that leaves me feeling highly unsettled about individuals building any sense of deep emotions around ChatGPT.
  • HardCodedBias
    50 minutes ago
    This is the path forward.

    AI will, in general, give recommendations to humans. Sometimes it will be in response to a direct prompt. Sometimes it will be in response to stimuli it receives about the user's environment (glasses, microphones, gps). Sometimes it will be from scouring the internet given the preferences it has learnt of the user.

    There will be more of this, much more. And it is a good thing.

  • Insanity
    7 hours ago
    I’m a pro user.. but this just seems like a way to make sure users engage more with the platform. Like how social media apps try to get you addicted and have them always fight for your attention.

    Definitely not interested in this.

  • taf2
    5 hours ago
    This has been surprisingly helpful for me. I've been using it for a little while and have enjoyed the morning updates. For many days it has actually been a better Hacker News for me, in that I was able to get insights into technical topics I've been focused on, ranging from Salesforce, npm, and Elasticsearch to Ruby... it's even helped me remember to fix a few bugs.
  • theartfuldodger
    1 hour ago
    Was quite unimpressive. In general ChatGPT has been degrading in default quality for months
  • ripped_britches
    8 hours ago
    Wow so much hate in this thread

    For me I’m looking for an AI tool that can give me morning news curated to my exact interests, but with all garbage filtered out.

    It seems like this is the right direction for such a tool.

    Everyone saying “they’re out of ideas” clearly doesn’t understand that they have many irons in the fire simultaneously, with different teams shipping different things.

    This feature is a consumer UX layer thing. It in no way slows down the underlying innovation layer. These teams probably don’t even interface much.

    ChatGPT app is merely one of the clients of the underlying intelligence effort.

    You also have API customers and enterprise customers who also have their own downstream needs which are unique and unrelated to R&D.

    • simianwords
      6 hours ago
      Not sure why this is downvoted but I essentially agree. There's a lot of UX layer products and ideas that are not explored. I keep seeing comments like "AI is cool but the integration is lacking" and so on. Yes that is true and that is exactly what this is solving. My take has always been that the models are good enough now and its time for UX to catch up. There are so many ideas not explored.
  • StarterPro
    4 hours ago
    Wasn't this already implemented via google and apple separately?
  • strict9
    8 hours ago
    Necessary step before making a move into hardware. An object you have to remember to use quickly gets forgotten in favor of your phone.

    But a device that reaches out to you reminds you to hook back in.

  • zelias
    5 hours ago
    Man, my startup does this but exclusively for enterprises, where it actually makes sense
  • Imnimo
    8 hours ago
    It's very hard for me to envision something I would use this for. None of the examples in the post seem like something a real person would do.
  • giovannibonetti
    9 hours ago
    Watch out, Meta. OpenAI is going to eat your lunch.
  • exitb
    8 hours ago
    Wow, did ChatGPT come up with that feature?
  • ibaikov
    8 hours ago
    Funny, I pitched a much more useful version of this like two years ago with clear use-cases and value proposition
  • melenaboija
    8 hours ago
    Holy guacamole. It is amazing all the BS these people are able to create to keep the hype of the language models' super powers.

    But well I guess they have committed 100s of billions of future usage so they better come up with more stuff to keep the wheels spinning.

  • ImPrajyoth
    8 hours ago
    Someone at OpenAI definitely said: let's connect everything to GPT. That's it. AGI.
  • MisterBiggs
    6 hours ago
    Great way to sell some of those empty GPU cycles to consumers
  • dlojudice
    8 hours ago
    I see some pessimism in the comments here but honestly, this kind of product is something that would make me pay for ChatGPT again (I already pay for Claude, Gemini, Cursor, Perplexity, etc.). At the risk of lock-in, a truly useful assistant is something I welcome, and I even find it strange that it didn't appear sooner.
    • furyofantares
      6 hours ago
      I doubt there would be this level of pessimism if people thought this is a progress toward a truly useful assistant.

      Personally it sounds negative value. Maybe a startup that's not doing anything else could iterate on something like this into a killer app, but my expectation that OpenAI can do so is very, very low.

    • simianwords
      6 hours ago
      Pessimism is how people now signal their savviness or status. My autistic brain took some time to understand this nuance.
    • thenaturalist
      8 hours ago
      Truly useful?

      Personal take, but the usefulness of these tools to me is greatly limited by their knowledge latency and limited modality.

      I don't need information overload on what playtime gifts to buy my kitten or some semi-random but probably not very practical "guide" on how to navigate XYZ airport.

      Those are not useful tips. It's drinking from an information firehose that'll lead to fatigue, not efficiency.

  • Stevvo
    9 hours ago
    "Now ChatGPT can start the conversation"

    By their own definition, its a feature nobody asked for.

    Also, this needs a cute/mocking name. How about "vibe living"?

  • andrewmutz
    8 hours ago
    Big tech companies today are fighting over your attention and consumers are the losers.

    I hate this feature and I'm sure it will soon be serving up content that is as engaging as the stuff the comes out of the big tech feed algorithms: politically divisive issues, violent and titillating news stories and misinformation.

  • wilg
    3 hours ago
    Contrary to all the other posters, apparently, I think it's probably a good idea for OpenAI to iterate on various ways of interacting with AI to see what people like. Obviously, in theory, having an AI that knows a lot about what you're up to giving you a morning briefing is potentially useful; it's in like every sci-fi movie: a voice starts talking to you in the morning about what's going on that day.
  • ric2z
    4 hours ago
    try clicking "Listen to article"
  • TZubiri
    4 hours ago
    Breaking the request response loop and entering into async territory?

    Great!

    The examples used?

    Stupid. Why would I want AI generated buzzfeed tips style articles. I guess they want to turn chatgpt into yet another infinite scroller

  • catigula
    9 hours ago
    Desperation for new data harvesting methodology is a massive bear signal FYI
    • fullstackchris
      9 hours ago
      Calm down bear we are not even 2% from the all time highs
  • bgwalter
    6 hours ago
    Since every "AI" company frantically releases new applications, may I suggest OpenAI+ to copy the resounding success of Google+?

    Google+ is incidentally a great example of a gigantic money sink driven by optimistic hype.

  • groby_b
    7 hours ago
    I'm feeling obliged to rehash a quote from the early days of the Internet, when midi support was added: "If I wanted your web site to make sounds, I'd rub my finger on the screen"

    Behind that flippant response lies a core principle. A computer is a tool. It should act on the request of the human using it, not by itself.

    Scheduled prompts: Awesome. Daily nag screens to hook up more data sources: Not awesome.

    (Also, from a practical POV: So they plan on creating a recommender engine to sell ads and media, I guess. Weehee. More garbage)

  • dvrj101
    7 hours ago
    so GPT tiktok in nutshell
  • jimmydoe
    6 hours ago
    It seems not useful for 95% of users today, but later can be baked into the hardware Ive designed. so, good luck, I guess?
  • thenaturalist
    8 hours ago
    Let the personal ensloppification begin!
  • mvieira38
    7 hours ago
    Why?
  • DonHopkins
    8 hours ago
    ChatGPT IV
    • xattt
      8 hours ago
      Episodes from Liberty City?
  • oldsklgdfth
    8 hours ago
    Technology service technology, rather than technology as a tool with a purpose. What is the purpose of this feature?

    This reads like the first step to "infinite scroll" AI echo chambers and next level surveillance capitalism.

    On one hand this can be exciting. Following up with information from my recent deep dive would be cool.

    On the other hand, I don't want to it to keep engaging with my most recent conspiracy theory/fringe deep dives.

  • khaledh
    8 hours ago
    Product managers live in a bubble of their own.
  • sailfast
    6 hours ago
    Absolutely not. No. Hard pass.

    Why would I want yet another thing to tell me what I should be paying attention to?

  • casey2
    8 hours ago
    AI doesn't have a pulse. Am I the only one creeped out by personification of tech?
    • 9rx
      7 hours ago
      "Pulse" here comes from the newspaper/radio lineage of the word, where it means something along the lines of timely, rhythmic news delivery. Maybe there is reason to be creeped out by journalists from centuries ago personifying their work, but that has little to do with tech.
  • iLoveOncall
    8 hours ago
    This is a joke. How are people actually excited or praising a feature that is literally just collecting data for the obvious purpose of building a profile and ultimately showing ads?

    How tone deaf does OpenAI have to be to show "Mind if I ask completely randomly about your travel preferences?" in the main announcement of a new feature?

    This is idiocracy to the ultimate level. I simply cannot fathom that any commenter that does not have an immediate extremely negative reaction about that "feature" here is anything other than an astroturfer paid by OpenAI.

    This feature is literal insanity. If you think this is a good feature, you ARE mentally ill.

  • Mistletoe
    9 hours ago
    I need this bubble to last until 2026 and this is scaring me.
  • zelias
    6 hours ago
    Yet another category of startups killed by an incumbent
  • mostMoralPoster
    4 hours ago
    Oh wow this is revolutionary!!
  • animanoir
    5 hours ago
    [dead]
  • catlover76
    8 hours ago
    [dead]
  • TealMyEal
    8 hours ago
    [flagged]
  • moralestapia
    9 hours ago
    OpenAI is a trillion dollar company. No doubt.

    Edit: Downvote all you want, as usual. Then wait 6 months to be proven wrong. Every. Single. Time.

    • JumpCrisscross
      8 hours ago
      I downvoted because this isn’t an interesting comment. It makes a common, unsubstantiated claim and leaves it at that.

      > Downvote all you want

      “Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”

      https://news.ycombinator.com/newsguidelines.html

      • moralestapia
        8 hours ago
        Welcome to HN. 98% of it is unsubstantiated claims.
  • kamranjon
    8 hours ago
    Can this be interpreted as anything other than a scheme to charge you for hidden token fees? It sounds like they're asking users to just hand over a blank check to OpenAI to let it use as many tokens as it sees fit?

    "ChatGPT can now do asynchronous research on your behalf. Each night, it synthesizes information from your memory, chat history, and direct feedback to learn what’s most relevant to you, then delivers personalized, focused updates the next day."

    In what world is this not a huge cry for help from OpenAI? It sounds like they haven't found a monetization strategy that actually covers their costs and now they're just basically asking for the keys to your bank account.

    • OfficialTurkey
      8 hours ago
      We don't charge per token in chatgpt
    • throwuxiytayq
      8 hours ago
      No, it isn’t. It makes no sense and I can’t believe you would think this is a strategy they’re pursuing. This is a Pro/Plus account feature, so the users don’t pay anything extra, and they’re planning to make this free for everyone. I very much doubt this feature would generate a lot of traffic anyway - it’s basically one more message to process per day.

      OpenAI has clearly been focusing recently on model cost-effectiveness, with the intention of making inference nearly free.

      What do you think the weekly limit is on GPT-5-Thinking usage on the $20 plan? Write down a number before looking it up.

      • kamranjon
        6 hours ago
        If you think that inference at OpenAI is nearly free, then I've got a bridge to sell you. Seriously though, this is not speculation: in a recent interview Altman pretty explicitly stated that they underestimated how much inference costs would dwarf training costs, and that the one thing that could bring this house of cards down is users deciding they don't actually want to pay for these services. So far, they certainly have not covered costs.

        I admit that I didn’t understand the Pro plan feature (I mostly use the API and assumed a similar model) but I think if you assume that this feature will remain free or that its costs won’t be incurred elsewhere, you’re likely ignoring the massive buildouts of data centers to support inference that is happening across the US right now.

  • sequoia
    6 hours ago
    Here's a free product enhancement for OpenAI if they're not already doing this:

    A todo app that reminds you of stuff. Say "here's the stuff I need to do: dishes, clean cat litter, fold laundry and put it away, move stuff to the dryer, then fold that when it's done, etc." Then it asks how long these things take, or gives you estimates. Then (here's the feature) it checks in with you at intervals: "hey, it's been 30 minutes, how's it going with the dishes?"

    This is basically "executive function coach." Or you could call it NagBot. Either way this would be extremely useful, and it's mostly just timers & push notifications.
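    The scheduling half really is small. A minimal sketch, assuming back-to-back tasks and a fixed check-in interval (the actual push notification is left as a stub; names and the interval are made up):

    ```python
    from datetime import datetime, timedelta

    def checkin_schedule(tasks, start, interval_minutes=30):
        """Given [(task, estimated_minutes), ...] done back to back from
        `start`, return (time, message) pairs for when nags should fire:
        a check-in every `interval_minutes` during a task, plus one when
        the task's estimate runs out."""
        nags = []
        cursor = start
        for task, est in tasks:
            task_end = cursor + timedelta(minutes=est)
            t = cursor + timedelta(minutes=interval_minutes)
            while t <= task_end:
                nags.append((t, f"It's been a bit, how's it going with {task}?"))
                t += timedelta(minutes=interval_minutes)
            nags.append((task_end, f"{task} should be done about now. Is it?"))
            cursor = task_end
        return nags

    sched = checkin_schedule(
        [("the dishes", 45), ("cat litter", 10)],
        start=datetime(2025, 1, 1, 9, 0),
    )
    # Each (time, message) pair would then be handed to whatever
    # push-notification mechanism you have available.
    ```

    Everything past this point (snoozing, rescheduling, the "coach" tone) is product polish on top of those timers.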

    • cadamsdotcom
      3 hours ago
      Humbly I suggest vibecoding this just for yourself. Not building a product - just a simple tool to meet your own needs.

      That’s AI: permissionless tool building. It means never needing someone to like your idea enough or build it how they think you’ll use it. You just build it yourself and iterate it.

    • DenisM
      5 hours ago
      This will drive the opposite of user engagement.