Yes.
Just do yay timeshift, then install all three packages that show up.
Timeshift itself is one, autosnap is a second, and the third is a systemd timer that handles the scheduled snaps (monthly, weekly, etc).
Oh, for sure. If you wait a month, the bigger update can be a lot more trouble.
But look at it like this. If a rolling distro has a problem once a week, which is fixed within 24 hours, updating daily guarantees you will run into it.
While updating weekly means your chance is only one in seven: by the time you update, the fix is more likely to already be in the repos, so you’ll jump right over the problematic update.
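The one-in-seven reasoning above can be sketched as a toy model. The numbers (one breakage per week, fixed within 24 hours) are taken from the comment; the function name is made up for illustration.

```python
from fractions import Fraction

DAYS_PER_WEEK = 7
BAD_DAYS_PER_WEEK = 1  # the breakage window: fixed within 24 hours

def chance_of_catching_breakage(updates_per_week: int) -> Fraction:
    """Chance that at least one update lands inside the one-day
    breakage window, with that window placed uniformly in the week."""
    covered = min(updates_per_week, DAYS_PER_WEEK)
    return Fraction(covered * BAD_DAYS_PER_WEEK, DAYS_PER_WEEK)

# Daily updates cover every day, so they always hit the bad window;
# a single weekly update lands in it one time in seven.
assert chance_of_catching_breakage(7) == 1
assert chance_of_catching_breakage(1) == Fraction(1, 7)
```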
The functionality is conceptually identical, yes.
And timeshift is by default set up such that only / is rolled back while /home is kept as-is.
So same as atomic distros, rolling back doesn’t mean going back in time in terms of personal files or settings.
So I’m really only missing out on the updates for something like Bazzite being potentially more reliable.
I’ve been on endeavour+plasma over a year now.
I share your desire for a system that always, 100%, every time, is there and ready to be used.
At the same time, I really like arch and the convenience of the AUR.
Hence, I boot-strap reliability onto my system through btrfs snapshots.
The setup is extremely simple (provided your install is grub+btrfs): just install timeshift plus the auto-snap systemd services. Configure it, and forget it.
Next time something breaks, instead of spending time on troubleshooting, you timeshift back to a known good point and then just get on with using your system.
With the auto-snap package installed every update also creates a restore point to go back to before it.
In addition to that, I started updating my system less frequently. The logic being that the more often you update a rolling release install, the more likely you are to catch it at a time when something is wrong, before it is fixed. Still regularly, but instead of every other day, I now have an update notification that goes off once a week.
The result has been zero time spent troubleshooting my system. If it worked yesterday, it’ll work today. If it worked last week, but doesn’t today, I’m a reboot away from a known good snapshot.
Wayland has been fine on nvidia for a while now.
Uuh. That is exactly how games work.
And that’s completely normal. Every modern game has multiple versions of the same asset at various detail levels, all of which are used. And when you choose between “low, medium, high” that doesn’t mean there’s a giant pile of assets that go unused. The game will use them all, rendering a different version of an asset depending on how close to something you are. The settings often just change how far away the game will render at the highest quality, before it starts to drop down to the lower LODs (levels of detail).
That’s why the games aren’t much smaller on console, for example. They’re not including a pile of unnecessary assets for different graphics settings from PC. The assets are all part of how modern games work.
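The selection logic described above can be sketched in a few lines. The thresholds, asset names, and the detail_scale knob are all invented for illustration; real engines do this per-mesh with engine-specific tuning.

```python
# Hypothetical distance-based LOD selection. Every variant ships with
# the game; the graphics setting only shifts *when* each one is used.
LOD_THRESHOLDS = [
    (10.0, "rock_lod0"),   # close up: full-detail mesh
    (50.0, "rock_lod1"),   # mid range: reduced mesh
    (200.0, "rock_lod2"),  # far away: low-poly mesh
]
FALLBACK = "rock_lod3"     # beyond all thresholds: cheapest version

def pick_lod(distance: float, detail_scale: float = 1.0) -> str:
    """Return the asset variant to render at a given camera distance.
    A "high" setting raises detail_scale, stretching each threshold so
    higher-detail versions stay in use farther out."""
    for max_dist, asset in LOD_THRESHOLDS:
        if distance <= max_dist * detail_scale:
            return asset
    return FALLBACK

assert pick_lod(5.0) == "rock_lod0"
assert pick_lod(120.0) == "rock_lod2"
assert pick_lod(120.0, detail_scale=3.0) == "rock_lod1"  # "high" setting
```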
“Handling that in the code” would still involve storing it all somewhere after “generation”, same way shaders are better generated in advance, lest you get a stuttery mess.
And it isn’t how most games do things even today. Such code does not exist. Not yet, at least. Human artists produce better results, and hence games ship with every version of every asset.
Finally automating this is what Unreal’s Nanite system has only recently promised to do, but it has run into snags.
CSAM is against their terms of use. Afaik they remove it both using some automated systems, as well as manually.
Games can’t really compress their assets much.
Stuff like textures generally uses a lossless bitmap format. The compression artefacts you get with lossy formats, while unnoticeable to the human eye, can cause much more visible rendering artefacts once the game engine goes to calculate how light should interact with the material.
That’s not to say devs couldn’t be more efficient, but it does explain why games don’t really compress that well.
Aren’t a lot of the 2.5" ones already empty space?
How big, and how expensive, would a 3.5" SSD be, if it actually filled enough of the space with NAND chips for the form factor to be warranted?
Those get taken down on a regular basis. Not to mention the atrocious bitrates that is all they can manage.
Meanwhile, a high quality BluRay rip on my drive ain’t going anywhere.
You can’t seed properly.
“Why store anything? Just re-download it from someone who’s still storing it!”
You see the catch 22 here?
There should be a library type called “Home videos and photos” for that.
Huh? Like just sitting there?
Or is it running a heavy background task like trickplay generation? You can disable trickplay (scrubbing previews) if your system isn’t beefy enough to keep up with them.
I run video game servers on my system, and while stream transcodes used to interfere with them, even that was fixed by assigning JF and the games to run on separate CPU cores.
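One way to do that split, assuming Jellyfin runs as a systemd service on an 8-core box, is a drop-in like the following; the path and core numbers are examples, not my exact setup.

```ini
# /etc/systemd/system/jellyfin.service.d/override.conf
# Pin Jellyfin (and its transcodes) to cores 0-3, leaving 4-7 free
# for the game server units, which get CPUAffinity=4-7 the same way.
[Service]
CPUAffinity=0-3
```

After `systemctl daemon-reload` and a restart of the service, transcodes can no longer starve the cores the game servers run on.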
Download mode is definitely not a thing yet.
One of the things that’s a bother with the deck is having to leave it on and probably on the charger, if you want to install a couple hundred gigs of games.
Once that’s done, updates don’t take long on my connection so they haven’t been a problem.
But it’d be even better if they can pull off a mode that does updates while near-sleeping.
What do you mean “but”?
This doesn’t produce anything. It removes jobs instead of creating them. And by the end there is one less company in the system.
I wrote in response to you saying this is what they “should” be doing. That it would either work, or not.
But this is working, sustainable businesses being butchered for their value on the meat market, rather than operated long term.
It most certainly isn’t what they “should” be doing.
If this is the best way to make money, the rich will continue to do it instead of starting new companies. That is not going to have pleasant long-term effects on the world.
Then you need to look into how private equity works.
They buy mature companies, often with borrowed capital, and then place the debt on the purchased company. They essentially make companies take on a massive loan to buy themselves from themselves, except the private equity firm ends up the owner.
The company then goes into overdrive trying to pay off the debt, while the firm makes changes intended to make the company “more efficient”. All while paying themselves “consulting fees” and “bonuses” for stepping in and “helping” the company do better.
This usually means mass layoffs, dumping assets, paycuts, restructuring…
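The debt mechanics above are easier to see with toy numbers. Every figure here is invented purely for illustration.

```python
# Toy leveraged-buyout arithmetic: the firm puts up a sliver of equity,
# borrows the rest, and parks the debt on the company it just bought.
purchase_price = 100_000_000   # what the PE firm pays for the company
firm_equity    = 20_000_000    # the firm's own money in the deal
borrowed       = purchase_price - firm_equity   # lands on the company

annual_profit  = 8_000_000     # what the company used to clear per year
interest_rate  = 0.10
interest_due   = borrowed * interest_rate

# The company's entire former profit now goes to servicing the buyout
# debt, before a single "consulting fee" or "bonus" is paid out.
assert borrowed == 80_000_000
assert interest_due == annual_profit
```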
Best case scenario, the company was already failing, and now it fails faster.
Worst case… The company was doing perfectly fine, making a sustainable living for its employees. And then it gets purchased by a private equity firm.
Suddenly everything is on fire. Not a single penny can go unpinched, workplace comfort unsacrificed, or employee unoverworked. And that becoming the new norm is the good ending.
Private equity makes money by killing the golden goose, and then finding another. And then another. And then another.
Buns really are. Expert burrowers, as flexible as cats, and they can lift and get under a lot more than you think.
What the money in our pocket allows us to afford is not the same as what our health and planet can afford.
Causing that disconnect by supplanting secular considerations with economic ones is one of the reasons so much evil is done and accepted in the name of profit.
Yeah… I’m not gonna be asking the stuff I already found answers to via an internet search.