25+ yr Java/JS dev
Linux novice - running Ubuntu (no Windows/Mac)

  • 0 Posts
  • 23 Comments
Joined 8 months ago
Cake day: October 14th, 2024

  • Most people don’t care about decentralization

    I think that’s largely not the case for people who are currently on Lemmy/Mastodon, but I think you’re right that it prevents larger adoption. I’m okay with that, though. I don’t need to talk with everyone. There’s room for more growth, probably especially for more niche communities, but at least for me Lemmy has hit critical mass.

    As for everything else, I either like the things you dislike or disagree that they are problems.



  • MagicShel@lemmy.zip to Technology@lemmy.world · Ai Code Commits · +2 · edited · 3 days ago

    An LLM providing “an opinion” is not a thing

    Agreed, but can we just use the common parlance? Explaining completions every time is tedious, and almost everyone talking about it at this level already knows. It doesn’t think, it doesn’t know anything, but it’s a lot easier to use those words to mean something that seems analogous. But yeah, I’ve been on your side of this conversation before, so let’s just read all that as agreed.

    this would not have to reach either a human or an AI agent or anything before getting fixed with little resources

    There are tools that do some of this automatically. I picked really low-hanging fruit that I still see every single day in multiple environments. LLMs attempt (wrong word here, I know) more, but they need review and acceptance by a human expert.

    Perfectly decent-looking “minor fixes” that are well worded, follow guidelines, and pass all checks, while introducing an off-by-one error or swapping two parameters that happen to be compatible and make sense in context, are the issue. And those, even if rare (empirically I’d say they are not that rare for now), are so much harder to spot without full human analysis. They are a real threat.

    I get that folks are trying to fully automate this. That’s fucking stupid. I don’t let seasoned developers commit code to my repos without review, so why would I let AI? Incidentally, seasoned developers can also suggest fixes with subtle errors (there’s a sketch of one at the end of this comment). Sometimes those escape into the code base, and sometimes perfectly good code that worked fine on prem goes to shit in the cloud. I just had to argue my team into fixing something that, due to lazy loading, executed over 10k SQL statements on a single page load in some cases. That shit worked “great” on prem but was taking up to 90 seconds in the cloud. All written by humans.
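
    For anyone who hasn’t hit it, here’s a minimal sketch of that lazy-loading trap. The entity names are hypothetical; the annotations are standard JPA:

    ```java
    import jakarta.persistence.*;
    import java.util.List;

    @Entity
    class Invoice {
        @Id Long id;

        // LAZY: each invoice's items are fetched on first access,
        // one extra SELECT per invoice on top of the invoice query itself.
        @OneToMany(mappedBy = "invoice", fetch = FetchType.LAZY)
        List<LineItem> items;
    }

    @Entity
    class LineItem {
        @Id Long id;

        @ManyToOne
        Invoice invoice;
    }

    // Rendering a page that touches items for N invoices issues N+1
    // queries. Tolerable on-prem next to the database; brutal once every
    // round trip crosses a cloud network. A fetch join collapses it to one:
    //   SELECT i FROM Invoice i JOIN FETCH i.items
    ```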

    The goal should not be to emulate human mistakes, but to make something better.

    I’m sure that is someone’s goal, but LLMs aren’t going to do that. They are a different tool that helps but does not in any way replace human experts. And I’m caught in the middle of every conversation because I don’t hate them enough for one side, and I’m not hyped enough about them for the other. But I’ve been working with them for several years now, I’ve watched them grow since GPT-2, and I understand them pretty well. Well enough not to trust them to the degree some idiots do, but I still find them really handy.
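
    And to make the quoted “swapped parameters” failure mode concrete, here’s a sketch (hypothetical names) of the kind of bug that compiles cleanly, passes static analysis, and reads plausibly in review:

    ```java
    class Account {
        private long balanceCents;
        void withdraw(long cents) { balanceCents -= cents; }
        void deposit(long cents)  { balanceCents += cents; }
    }

    class Payments {
        // Contract: move amountCents from source to target.
        static void transfer(Account source, Account target, long amountCents) {
            source.withdraw(amountCents);
            target.deposit(amountCents);
        }

        static void refund(Account customer, Account merchant, long amountCents) {
            // Subtle defect: a refund should flow merchant -> customer,
            // but these type-compatible arguments are swapped. Every
            // automated check passes; only a reviewer who knows the
            // domain intent catches it.
            transfer(customer, merchant, amountCents);
        }
    }
    ```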


  • MagicShel@lemmy.zip to Technology@lemmy.world · Ai Code Commits · +33/−3 · 4 days ago

    The place I work is actively developing an internal version of this. We already have optional AI PR reviews (they neither approve nor reject, just offer an opinion). As a reviewer, AI is the same as any other. It offers an opinion and you can judge for yourself whether its points need to be addressed or not. I’ll be interested to see whether its comments affect the comments of the tech lead.

    I’ve seen a preview of a system that detects problems like failing sonar analysis and it can offer a PR to fix it. I suppose for simple enough fixes like removing unused imports or unused code it might be fine. It gets static analysis and review like any other PR, so it’s not going to be merging any defects without getting past a human reviewer.

    I don’t know how good any of this shit actually is. I tested the AI review once and it didn’t have a lot to say because it was a really simple PR. It’s a tool. When it does good, fine. When it doesn’t, it probably won’t take any more effort than any other bad input.

    I’m sure you can always find horrific examples, but the question is how common they are and how subtle any introduced bugs are, to get past the developer and a human reviewer. Might depend more on time pressure than anything, like always.




  • I have a set of attributes that I associate with consciousness. We can disagree in part, but if your definition is so broad as to include math formulas there isn’t even common ground for us to discuss them.

    If you want to say contemplation/awareness of self isn’t part of it, then fine. I’m not as precious about an ant-like consciousness as I would be about a human-like perception of self, and people can debate what ethical obligations we have to an ant-like consciousness when we can achieve even that, but we aren’t there yet. LLMs are nothing but a process of transforming input to output. I think consciousness requires rather more than that, or we wind up with erosion being considered a candidate for consciousness.

    So I’m not the authority, but if we don’t adhere to some reasonable layman’s definition it quickly gets into weird wankery that I don’t see any value in exploring.





    1. Let’s say we execute an algorithm by hand on paper. Can it be conscious? Why would it be any different run on silicon rather than paper?

    2. Because they are capable of fiction. We write stories about sentient AI and those inform responses to our queries.

    I get playing devil’s advocate and it can be useful to contemplate a different perspective. If you genuinely think math can be conscious I guess that’s a fair point, but that would be such a gulf in belief for us to bridge in conversation that I don’t think either of us would profit from exploring that.


  • Consciousness requires contemplation of self. Which requires the ability to contemplate.

    Current AIs mainly function as complex algorithms that run when invoked. They are 100% not conscious, any more than a² + b² = c² is conscious. An AI can simulate the words of a conscious being, but those words don’t come from any awareness of internal state; they are a result of the prompt (including injected data and instructions).

    In the future, I’m sure an AI could be designed that spends time thinking about its own existence, but I’m not sure why anyone would pay for all the compute to think about things not directly requested.


  • Anything UI is kinda bullshit because HTML and CSS were never designed to produce pixel-perfect fidelity on every screen, but companies insist on it, and jank like text shifting slightly when you hover your mouse over it is bad UX. So what we wind up with is a fifty-level hierarchy of containers making sure everything lines up just so. That complexity is imposed by the intersection of HTML, CSS, and JS. Not that the previous developer wasn’t an idiot, but I freaking hate front-end work despite being “full-stack.”



  • it makes for a very good narrative to spin at investors

    Particularly for the investors in AI companies. AI is useful. I use it a lot, but all of this shit they put out about AIs taking over the world, or how we’re going to have to figure out how to deal with 90% unemployment, is science-fiction marketing.

    It’s not going to take over the world. It’s not going to put artists out of work, not once consumers take in the AI-generated results.

    It’s sure as fuck not putting software devs out of work on any kind of scale. It makes me a bit more productive, but not enough to replace a productive co-worker.

    On the other hand, long before LLMs, I had team members who would have boosted overall team productivity by getting fired.


  • Surprisingly, the mistakes ChatGPT made weren’t related to picture processing. Every time I’ve sent a picture, it has flawlessly analyzed the text (even if it’s a screenshot of a massive Linux log or a screenshot with multiple windows and arbitrary text placement). The problems were more that the markdown table I created wouldn’t be reproduced perfectly with new changes/additions. It’s pretty reliable early on, but as the chat or the table gets longer, fidelity can be lost. Not very often, but it does happen.

    Just to clarify: as long as you’re paying close attention and can catch mistakes or verify the output, I find AI does make such tasks much less tedious.