People store large quantities of data in their electronic devices and transfer some of this data to others, whether for professional or personal reasons. Data compression methods are thus of the utmost importance, as they can boost the efficiency of devices and communications, making users less reliant on cloud data services and external storage devices.
Lossless is the big claim here, and nobody is fixating on it because “AI” discussions only ever run through the same set of talking points.
I get how semantic understanding would trade compute for file size in compression. What I don’t get is how you can use it deterministically so that a partial input always reconstructs the exact same complete output. I’d love to go over the full paper, though even then the maths would probably go way, way over my head.
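From what I understand (speaking generally, not about this paper specifically), the usual trick in model-based lossless compression is that the model never emits data itself. It only assigns probabilities to the next symbol, and an entropy coder turns those probabilities into bits; as long as the decoder runs the identical model deterministically, it can replay every prediction and invert each step exactly. Here’s a minimal sketch where a toy frequency counter stands in for the neural predictor, and symbols are stored as their rank in the model’s prediction (small ranks compress well); the names `ranked_symbols`, `encode`, and `decode` are mine, not from any paper:

```
from collections import Counter

def ranked_symbols(counts, alphabet):
    # Symbols ordered from most to least likely; ties broken
    # alphabetically so encoder and decoder always agree.
    return sorted(alphabet, key=lambda s: (-counts[s], s))

def encode(text, alphabet):
    counts = Counter()
    ranks = []
    for ch in text:
        order = ranked_symbols(counts, alphabet)
        ranks.append(order.index(ch))  # small rank = good prediction
        counts[ch] += 1                # update model exactly as decoder will
    return ranks

def decode(ranks, alphabet):
    counts = Counter()
    out = []
    for r in ranks:
        order = ranked_symbols(counts, alphabet)
        ch = order[r]                  # invert the rank using identical state
        out.append(ch)
        counts[ch] += 1
    return "".join(out)

alphabet = sorted(set("abracadabra"))
ranks = encode("abracadabra", alphabet)
assert decode(ranks, alphabet) == "abracadabra"  # bit-exact round trip
print(ranks)  # mostly small numbers an entropy coder stores cheaply
```

The same round trip works if you swap the counter for a neural model: the “partial input” is just the decoder’s own output so far, fed back into the model to reproduce the next prediction bit-for-bit. Determinism of the model, not its accuracy, is what makes it lossless; a bad model only costs you compression ratio.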