

There are a few ways to use the TPM to auto-decrypt on boot without a passphrase. systemd-cryptenroll is my favorite.
Because it says to do so?
Proxmox uses Debian as the OS, and for several scenarios its own documentation says to install Debian first and then add the Proxmox software on top. It’s managing QEMU/KVM on a Debian-managed kernel.
I’d say that those details that vary tend not to vary within a language and ecosystem, so a fairly dumb correlative relationship is generally enough to be fine. There’s no way to use logic to infer that in language X you need to do mylist.join(string) but in language Y you need to do string.join(mylist), but it’s super easy to recognize tokens that suggest those things and correlate them to the vocabulary that matches the context.
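For example, Python puts join on the separator string while JavaScript puts it on the array; a minimal sketch of that exact flip:

    # Python: join is a method on the separator string, taking the list.
    words = ["spam", "eggs", "ham"]
    print(", ".join(words))   # -> spam, eggs, ham

    # JavaScript flips it: the method lives on the array and takes the
    # separator, i.e. words.join(", ") -- same idea, opposite vocabulary.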
Rinse and repeat for things like: do I need to specify a type here, and what is the vocabulary for the best type for this numeric value? This variable that makes sense is missing a declaration; does this look like a genuinely new, distinct variable, or just a typo of one that was already declared?
But again, I’m mostly describing what can sort of work. My personal experience is that it’s wrong so often as to be annoying and to get in the way of more traditional completion behaviors that play it safe, though those offer less help, particularly for languages like Python or JavaScript.
Fine: a chess engine capable of running on 1970s electronics that were affordable even at the time will best what marketing folks would have you think is an arbitrarily capable “reasoning” model running on top-of-the-line 2025 hardware.
You can split hairs about “well actually, the 2600 is hardware and a chess engine is the software” but everyone gets the point.
As to assertions that no one should expect an LLM to be a chess engine: well, tell that to the industry that is asserting LLMs are now “reasoning” and provide a basis for replacing most of the labor pool. We need stories like this to calibrate expectations in a way common people can understand…
Oh man, I feel this. A couple of times I’ve had to field questions about some REST API I support and they ask why they get errors when they supply a specific attribute. Now that attribute never existed, not in our code, not in our documentation, we never thought of it. So I say “Well, that attribute is invalid, I’m not sure where you saw to do that”. They get insistent that the code is generated by a very good LLM, so we must be missing something…
To be fair, a decent chunk of coding is stupid boilerplate/minutia that varies environment to environment, language to language, library to library.
So an LLM can do some code completion: filling out a bunch of boilerplate that is blatantly obvious, generating the redundant text mandated by certain patterns, and keeping straight details between languages like “does this language want join as a method on a list with a string argument, or vice versa?”
Problem is, this can sometimes be more trouble than it’s worth, as miscompletions are annoying.
GPTs which claim to use a Stockfish API
Then the actual chess isn’t the LLM. If you are going through Stockfish, then the LLM doesn’t add anything; Stockfish is doing everything.
The whole point of the marketing rage is that LLMs can do all kinds of stuff, doubling down on this with the branding of some approaches as “reasoning” models, which are roughly “similar to ‘pre-reasoning’, but forcing use of more tokens on disposable intermediate generation steps”. With this facet of LLM marketing, the promise would be that the LLM can “reason” its way through a chess game without particular enablement. In practice, people trying to feed gobs of chess data into an LLM end up with an LLM that doesn’t even comply with the rules of the game, let alone provide reasonably competitive responses to an opponent.
Without well-researched material making the counterpoint explicit, the marketing presentation gets to stand largely unopposed.
So this is good even if most experts in the field consider it an obvious result.
Particularly to counter some more baseless marketing assertions about the nature of the technology.
And that’s pretty damn useful, but it’s obnoxious to have expectations set so wildly incorrectly.
incorrect behavior that doesn’t even have the courtesy to throw an actual error.
To be fair, this can be said of C. A C executable only really forces a crash out when you royally screw up beyond the bounds of your memory. Otherwise functions just return a negative value and calling code that never bothers to check just keeps on going.
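You can even watch that convention in action from Python by calling libc through ctypes (a sketch assuming a Unix-like host where find_library can locate libc):

    import ctypes
    import ctypes.util

    # Load the C library (e.g. libc.so.6 on Linux).
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # close(2) on a bogus file descriptor fails, but the only signal is a
    # -1 return value; nothing forces the caller to look at it.
    rc = libc.close(-1)
    print(rc)  # -> -1, and execution just keeps on going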
Golang is similar, slightly mitigated in that if you are assigning any return value from a function, you must also explicitly receive the error, so you know full well that you are being lazy if you don’t handle it. Well, unless you use a panic/recover scheme, but the Golang community will skewer you alive for casually suggesting that, and third-party libraries certainly aren’t going to do it that way.
Could I write a compiler in C that does this check on a piece of Rust code?
Well yes, but that code has to be written in Rust. The human has to follow rules to give the compiler a chance to check things.
C is so simplistic that if I can write a piece of functionality in C, I must understand its inner workings fully. Not just how to use the feature, but how the feature works under the hood.
I don’t think that’s particularly more true of C than of Rust or even Golang. In C you are frequently making function calls anyway for the real fun stuff. If you ever compile a “simplistic” chunk of C code whose translation to assembly you think is obvious, and then open up the assembly output, you are likely to be very surprised by what the compiler chose to do. I’ve seen professional C developers who never actually had a reason to fully understand how the stack works, since C abstracts that away and the implications of the stack don’t matter until you exceed some limitation.
Technically any language runtime can end in a segmentation fault.
For some languages this shouldn’t be possible in principle, but runtimes can have bugs, and/or you are calling libraries that run native code at some point.
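The classic demonstration in Python is asking ctypes to read from address 0; the interpreter can’t guard a raw native read, so the process dies with SIGSEGV rather than a nice Python exception (don’t run this anywhere you care about):

    import ctypes

    # A raw native read at address 0: no Python-level exception is raised,
    # the whole process segfaults instead.
    ctypes.string_at(0)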
There are now headlights that can be “high” but block out portions of the beam directed at light sources like oncoming headlights. Can’t have them in the US though.
Unfortunately, the ecosystem around github has evolved so that most folks centralize their testing and deployment code to execute on github infrastructure. Frankly, it’s a perversion of the decentralized design of git.
Fortunately for my team, it doesn’t matter because our process requires stuff that can’t be done from github infrastructure anyway, so we have kept the automatic testing and deployment on premise even as github is the ‘canonical’ place for the code to live.
Always_has_been.jpg
Notably, phenylephrine was approved in pill form for decongestion and is all over the place… but doesn’t do a damn thing. The point was to keep pseudoephedrine limited in the market to try to fight meth.
From what I recall, the advocates kept saying:
Of course, no one ever explained why I would want to pay full price for a game and also pay a monthly fee to access it once purchased. That was the most mind-boggling facet of Google’s concept to me, even more boggling than trying to render games server-side when even the cheapest end-user device can locally render PS3, maybe PS4-level graphics nowadays.
I remember some people very vehemently telling me that I was dumb to be skeptical of Stadia, that it really was going to just take over the industry…
More like the AI-rationalized collapse of the industry.
The cuts largely have nothing at all to do with AI, but it makes for a very good narrative to spin at investors.
One thing left unclear is how the determination is made between emergency and non-emergency.
If it’s a separate number, ok, seems clear cut enough.
If a human always answers and, when it’s some bullshit, they just click a button to punt to the AI instead of hanging up, OK.
If they are saying the AI answers, does the triage, and hands off immediately to a human when an “emergency” is detected, then I could see how that promise could fail.