• 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: June 30th, 2023


  • Before LLMs people were often saying this about people smarter than the rest of the group.

    Smarter by whose metric? If you can’t write software that meets the bare minimum of comprehensibility, you’re probably not as smart as you think you are.

    Software engineering is an engineering discipline, and conformity is exactly what you want in engineering — because in engineering you don’t call it ‘conformity’, you call it ‘standardization’. Nobody wants to hire a maverick bridge-builder; they wanna hire the guy who follows standards and best practices, because that’s how you build a bridge that doesn’t fall down. The engineers who don’t follow standards and who deride others as being too stupid or too conservative to understand their vision are the ones who end up crushed to death by their imploding carbon fiber submarine at the bottom of the Atlantic.

    AI has exactly the same “maverick” tendencies as human developers (because, surprise surprise, it’s trained on human output), and until that gets ironed out, it’s not suitable for writing anything more than the most basic boilerplate — which is stuff you can usually just copy-paste together in five minutes anyway.


  • The company I work for has recently mandated that we must start using AI tools in our workflow and is tracking our usage, so I’ve been experimenting with it a lot lately.

    In my experience, it’s worse than useless when it comes to debugging code. The class of errors that it can solve is generally simple stuff like typos and syntax errors — the sort of thing that a human would solve in 30 seconds by looking at a stack trace. The much more important class of problem, errors in the business logic, it really really sucks at solving.
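
    To make that distinction concrete, here’s a toy illustration (entirely made up, not from any real codebase): a syntax error announces itself with a traceback pointing at the exact line, while a business-logic bug runs silently and just produces wrong numbers.

    ```python
    # Class 1: a typo/syntax error. Python won't even run this, and the error
    # message points straight at the problem -- trivial for a human or an LLM.
    #
    #     def total(prices):
    #         return sum(prices    # SyntaxError: '(' was never closed

    # Class 2: a business-logic error (hypothetical example). This runs happily,
    # produces no stack trace at all, and just quietly computes the wrong thing.
    def apply_discount(price, discount_percent):
        # Bug: returns the discount amount instead of the discounted price.
        # Intended: price * (1 - discount_percent / 100)
        return price * (discount_percent / 100)

    print(apply_discount(100, 20))  # prints 20.0 -- customer pays 20 instead of 80
    ```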

    For those problems, it very confidently identifies the wrong answer about 95% of the time. And if you’re a dev who’s desperate enough to ask AI for help debugging something, you probably don’t know what’s wrong either, so it won’t be immediately clear whether the AI just gave you garbage or whether its suggestion has any real merit. So you go and manually confirm that the LLM is full of shit, which costs you time… then you go back to the LLM with more context and ask it to try again. Its second suggestion will sound even more confident than the first (“Aha! I see the real cause of the issue now!”), but it will still be nonsense. You waste more time ruling out the second suggestion, then go back to the AI to scold it for being wrong again.

    Rinse and repeat this cycle enough times until your manager is happy you’ve hit the desired usage metrics, then go open your debugging tool of choice and do the actual work.



  • I may be wrong, but I suspect that any nearby black holes (i.e. within a few dozen light-years) with active accretion disks would already be detectable in visible light and would also be bright enough in x-ray emissions that prior searches would have uncovered them.

    In my limited googling, the smallest active black hole I could find was A0620-00A, which is about 6 solar masses. Its accretion disk is visible in x-rays from 3000 light-years away, so I assume any small black holes accreting matter anywhere near us would also be visible.
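
    Rough numbers behind that assumption — the 3000 light-year figure is from above, the 30 light-year “nearby” distance is just an arbitrary stand-in for something reachable, and received flux scales with the inverse square of distance:

    ```python
    # X-ray flux falls off as 1/distance^2, so a disk detectable at 3000 ly
    # would be blindingly obvious if it sat anywhere near us.
    d_known = 3000    # ly, distance at which the disk is already visible in x-rays
    d_nearby = 30     # ly, hypothetical "reachable" black hole (assumed)

    flux_ratio = (d_known / d_nearby) ** 2
    print(f"A similar disk at {d_nearby} ly would appear ~{flux_ratio:,.0f}x brighter")
    # -> ~10,000x brighter; existing x-ray surveys could not have missed it
    ```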

    So more sensitive x-ray instruments would be useful for finding more distant SMBHs, but not necessary for finding any small, nearby black holes that we could actually stand a chance of reaching with a spacecraft. Most likely there just aren’t any active black holes in our neighborhood — only quiet ones we can’t see in x-rays.


  • That’s a pretty good idea, especially when you consider another problem that needs to be solved by any fast-moving spacecraft: dust.

    If a spacecraft hurtling through interstellar space at 0.3c encounters even a tiny grain of dust, the energy released by the collision is going to be enormous — more than enough to destroy the ship entirely. So far, the best strategy anyone has come up with to mitigate this risk is to just… send a shitload of probes all at once. Basically shotgun blast tiny craft at the sky in hopes that at least one of them makes it to the final destination unscathed.
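
    To put a rough number on “enormous” — the 0.3c figure is from above, and the microgram grain mass is just an illustrative assumption:

    ```python
    import math

    # Relativistic kinetic energy of one dust grain meeting the craft at 0.3c.
    c = 3.0e8    # speed of light, m/s
    v = 0.3 * c  # closing speed
    m = 1e-9     # assumed grain mass: 1 microgram

    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    ke_joules = (gamma - 1) * m * c ** 2

    print(f"~{ke_joules / 1e6:.1f} MJ")  # ~4.3 MJ
    # Roughly a kilogram of TNT (~4.2 MJ) delivered by a single microgram speck,
    # which is why "launch a swarm and hope" is the current best answer.
    ```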

    I imagine it wouldn’t be too hard to modify this strategy and stagger the launch times somewhat to create more of a ‘caravan’ of probes that could also double as a signal relay.



  • This has been my experience as well, only the company I work for has mandated that we must use AI tools every day (regardless of whether we want/need them) and is actively tracking our usage to make sure we comply.

    My productivity has plummeted. The tool we use (Cursor) requires so much hand-holding that it’s like having a student dev with me at all times… only a real student would actually absorb information and learn over time, unlike this glorified Markov Chain. If I had a human junior dev, they could be a productive and semi-competent coder in 6 months. But 6 months from now, the LLM is still going to be making all of the same mistakes it is now.

    It’s gotten to the point where I ask the LLM to solve a problem for me just so that I can hit the required usage metrics, but completely ignore its output. And it makes me die a little bit inside every time I consider how much water/energy I’m wasting for literally zero benefit.


  • The proposal here is very similar to the Breakthrough Starshot initiative that wants to send a probe to Alpha Centauri, the nearest star system to our sun.

    Basically the idea is to take a very small (i.e. low mass) craft with a large solar sail and accelerate it to a significant fraction of the speed of light using very powerful ground-based microwave laser arrays. The neat thing about this concept is that all of the technology essentially exists already — it’s just a matter of scaling up existing concepts and miniaturizing existing sensors.
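
    A minimal sketch of why the physics closes, using the ballpark figures usually quoted for Starshot-class concepts (100 GW of beam power on a gram-scale craft — assumptions, not numbers from this article), and ignoring relativistic corrections and beam divergence:

    ```python
    # Radiation pressure on a perfectly reflective sail: F = 2P / c.
    c = 3.0e8     # m/s
    P = 100e9     # beam power on the sail: 100 GW (assumed Starshot-class figure)
    m = 0.004     # craft + sail mass: ~4 grams (assumed)

    F = 2 * P / c            # ~670 N of thrust from light pressure alone
    a = F / m                # ~1.7e5 m/s^2, roughly 17,000 g
    t = (0.2 * c) / a        # time to reach 0.2c at constant acceleration

    print(f"thrust ~{F:.0f} N, accel ~{a:.1e} m/s^2, ~{t/60:.0f} min to 0.2c")
    # A few minutes of beam time; the hard part is building and aiming the beamer,
    # and making a gram-scale probe that survives ~17,000 g.
    ```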

    The black hole rendezvous suggested in this paper is a lot more ambitious than Starshot, targeting a distance of 20-40 light-years (for comparison, Alpha Centauri is only about 4 light-years away) and a max speed of 30% of the speed of light (vs the ~20% c targeted by Starshot).

    I think the main problem here (other than the requirement of building what essentially amounts to a microwave death ray) would be developing an antenna that’s both small enough to fit under the strict mass limits and powerful enough to broadcast the data 20+ light-years back to Earth. Maybe space-time lensing effects around the black hole itself could be used to amplify the signal?

    Another problem is that even at extreme speeds, this is a multi-decade mission — like 70+ years. Considering the travel delay involved in sending back a signal, it’ll be a century at least before any data would arrive at Earth. Unfortunately, century-long projects are an extremely hard sell for the people who hold the scientific purse strings.
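
    For a quick sanity check on those travel times (using the 20-40 light-year range and 0.3c cruise speed from above, and ignoring the brief acceleration phase):

    ```python
    cruise_speed = 0.3  # fraction of c

    for distance_ly in (20, 40):                 # the suggested target range
        travel_years = distance_ly / cruise_speed
        data_home = travel_years + distance_ly   # plus light-speed trip for the signal
        print(f"{distance_ly} ly: probe arrives ~{travel_years:.0f} yr, "
              f"first data reaches Earth ~{data_home:.0f} yr after launch")
    # -> 20 ly: ~67 / ~87 yr;  40 ly: ~133 / ~173 yr
    ```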

    Oh, also… We don’t currently know of any BHs within the target range, and the paper’s author even admits that any targets more distant than 50ish light-years are basically unreachable. The current closest-known BH is more than 1000 light-years away, so we’ve still got a lot of work to do in finding a suitable target. Fortunately the field of black hole detection is advancing quickly, and the new Vera Rubin observatory is very likely to spot many previously-unknown black holes in the coming years. Hopefully some of those will be close!


  • maybe a wide field telescope and a software stack on the ground specifically built to catalog the “wobble” of stars and find invisible binary partners? It would double as an exoplanet detector, and IIRC there are already systems doing this.

    This has indeed already been done! In fact, the closest known black hole to Earth was discovered by Gaia, a space telescope that collects this kind of data.

    We’ve also done what are called ‘microlensing surveys’ that look for the effect of spacetime distortion on background stars rather than the wobble of binary partners. Some of these have already found candidate objects over the years; however, the new Vera Rubin observatory that’s just come online is expected to be really good at this sort of thing, so we should spot many more over the next few years.

    Accretion disks seem to have peaks around 7 keV, so maybe a very specialized x-ray telescope?

    We’ve done this too! The Chandra space telescope has discovered hundreds of thousands of x-ray sources throughout the universe, including many, many black holes. Most of those are supermassive black holes at the centers of other galaxies, but hundreds of “local” objects have been found as well.








  • This has sadly been the norm in the tech industry for at least a decade now. The whole ecosystem has become so accustomed to quick injections of investment cash that products/businesses no longer grow organically but instead hit the scene in one huge development and marketing blitz.

    Consider companies like Uber or AirBnB. Their goal was never to make a safe, stable, or even legal product. Their goal was always to be first. Command the largest user base possible in the shortest time possible, then worry about all the details later. Both of those products have had disastrous effects on existing businesses and communities while operating in anti-competitive ways and flouting existing laws, but so what? They’re popular! Tens of millions of people already use them, and by the time government regulation catches up with what they’re doing, it’s already too late. What politician would be brave enough to try to ban a company like Uber? What regulator still has enough power to rein in a company the size of AirBnB?

    OpenAI is playing the same game. They don’t care if their product is safe — hell, they don’t even really care if it’s useful, or profitable. They just want to be ubiquitous, because once they achieve that, the rest doesn’t matter.