• 0 Posts
  • 14 Comments
Joined 1 year ago
cake
Cake day: January 25th, 2024

help-circle
  • uhhhh the nation in question is the US. not a bad idea to consider other things to put money into

    I don’t disagree that the US has been quite destabilized as a financial player on the world stage, but the US still has an insane amount of influence over global trade, and holds a ton of power within its own economy.

    To argue that Bitcoin is more strongly backed than the entire long-standing, heavily globally financially integrated nation is silly, especially considering, comparatively, how relatively few manufacturers of ASIC miners there are for Bitcoin that could theoretically heavily influence the distribution of hashrate over time if compelled, or how most transactions in crypto still require a financial middleman to offload into currencies like USD because businesses simply can’t operate well when transacting with BTC in most circumstances if that also requires holding onto the BTC afterwards.

    holding BTC long term isn’t that risky

    And the original post was comparing short term treasuries to Bitcoin, not long term ones.

    And even then, Bitcoin’s long-term outlook is bleak considering the % of block rewards paid from fees hasn’t substantially increased to make up for the halvings, which if the trend continues, will result in the cost per block cratering over time, leading to heavily slashed overall hashrate protecting the network.


  • I’ll gladly give you a reason. I’m actually happy to articulate my stance on this, considering how much I tend to care about digital rights.

    Services that host files should not be held responsible for what users upload, unless:

    1. The service explicitly caters to illegal content by definition or practice (i.e. the if the website is literally titled uploadyourcsamhere[.]com then it’s safe to assume they deliberately want to host illegal content)
    2. The service has a very easy mechanism to remove illegal content, either when asked, or through simple monitoring systems, but chooses not to do so (catbox does this, and quite quickly too)

    Because holding services responsible creates a whole host of negative effects. Here’s some examples:

    • Someone starts a CDN and some users upload CSAM. The creator of the CDN goes to jail now. Nobody ever wants to create a CDN because of the legal risk, and thus the only providers of CDNs become shady, expensive, anonymously-run services with no compliance mechanisms.
    • You run a site that hosts images, and someone decides they want to harm you. They upload CSAM, then report the site to law enforcement. You go to jail. Anybody in the future who wants to run an image sharing site must now self-censor to try and not upset any human being that could be willing to harm them via their site.
    • A social media site is hosting the posts and content of users. In order to be compliant and not go to jail, they must engage in extremely strict filtering, otherwise even one mistake could land them in jail. All users of the site are prohibited from posting any NSFW or even suggestive content, (including newsworthy media, such as an image of bodies in a warzone) and any violation leads to an instant ban, because any of those things could lead to a chance of actually illegal content being attached.

    This isn’t just my opinion either. Digital rights organizations such as the Electronic Frontier Foundation have talked at length about similar policies before. To quote them:

    “When social media platforms adopt heavy-handed moderation policies, the unintended consequences can be hard to predict. For example, Twitter’s policies on sexual material have resulted in posts on sexual health and condoms being taken down. YouTube’s bans on violent content have resulted in journalism on the Syrian war being pulled from the site. It can be tempting to attempt to “fix” certain attitudes and behaviors online by placing increased restrictions on users’ speech, but in practice, web platforms have had more success at silencing innocent people than at making online communities healthier.”

    Now, to address the rest of your comment, since I don’t just want to focus on the beginning:

    I think you have to actively moderate what is uploaded

    Catbox does, and as previously mentioned, often at a much higher rate than other services, and at a comparable rate to many services that have millions, if not billions of dollars in annual profits that could otherwise be spent on further moderation.

    there has to be swifter and stricter punishment for those that do upload things that are against TOS and/or illegal.

    The problem isn’t necessarily the speed at which people can be reported and punished, but rather that the internet is fundamentally harder to track people on than real life. It’s easy for cops to sit around at a spot they know someone will be physically distributing illegal content at in real life, but digitally, even if you can see the feed of all the information passing through the service, a VPN or Tor connection will anonymize your IP address in a manner that most police departments won’t be able to track, and most three-letter agencies will simply have a relatively low success rate with.

    There’s no good solution to this problem of identifying perpetrators, which is why platforms often focus on moderation over legal enforcement actions against users so frequently. It accomplishes the goal of preventing and removing the content without having to, for example, require every single user of the internet to scan an ID (and also magically prevent people from just stealing other people’s access tokens and impersonating their ID)

    I do agree, however, that we should probably provide larger amounts of funding, training, and resources, to divisions who’s sole goal is to go after online distribution of various illegal content, primarily that which harms children, because it’s certainly still an issue of there being too many reports to go through, even if many of them will still lead to dead ends.

    I hope that explains why making file hosting services liable for user uploaded content probably isn’t the best strategy. I hate to see people with good intentions support ideas that sound good in practice, but in the end just cause more untold harms, and I hope you can understand why I believe this to be the case.


  • My understanding is that for reliable email, you need to host with microsoft or google otherwise you are more likely to get sorted into junk mail.

    That’s technically accurate, but it depends on the context. For example, if you set up DMARC properly and use a brand new custom domain as a personal email, yeah, you’re much more likely to get sent to spam, but not necessarily right away, and as you use that more frequently, or communicate with people using the larger providers like Google or Microsoft, the higher the “reputation” of your domain will get.

    If you want the highest possible level of reliability though, then yeah, Google or Microsoft’s options are likely gonna give you the highest chance right off the bat without any fuss.



  • When running a local node, the most other people could possibly see is that “x IP is running a Monero node”

    When connecting to a remote node, the node can see:

    • Your IP address
    • When you submit a transaction (which could link your IP to your transactions)
    • The last block your wallet synced (which could be used to determine when you usually use/spent monero last)

    It’s also possible for a remote node to feed your wallet a manipulated list of decoys, which can reduce the anonymity of the transaction you submit by allowing the remote node to simply remove the fake decoys to find which isn’t the decoy (you.)


  • I’m convinced this was written by GPT.

    I’m a human being. I know my writing style can often come off weird to some people, but I can assure you I don’t outsource my thinking to a word prediction program to make my points for me.

    We disagree on how good or bad porn is for society and the youth, so the rest doesn’t even matter.

    I haven’t seen any evidence that light or moderate consumption of porn by legal adults produces significant negative consequences for them or society at large, so long as the porn doesn’t involve non-consenting parties, underage individuals, etc. Thus, I don’t think it’s reasonable to heavily monitor and restrict access to every single individual in our society.

    As for kids, research is obviously lacking since it’s somewhat of a touchy subject for researchers to study, but since we know sex ed, conversations between kids & parents, and even the most basic of parental controls and monitoring can prevent the vast majority of the negative effects, and even the whole of the initial consumption while underage, then that’s what I advocate for.

    Until I see evidence to the contrary, that demonstrates larger harms from general consumption trends than the surveillance of the online media consumption of every single citizen, on top of the possible risks to online censorship, while other methods we already know work well still can’t reduce that risk below the possible harms of a monitoring/access control system, then I’m not going to support such a system.


  • You show your ID and a notary enters their credentials to allow you to create an account

    The problem then lies in how whoever (likely the government) can ensure that verified accounts are indeed verified by real people.

    If any notary can create these accounts by just claiming they saw a proper ID/biometrics, then even one malicious notary could make as many “verified” accounts as they want. If they’re then investigated, that would mean there’d be monitoring in place to see who they met with, which would defeat the privacy preservation method of only having them look at it.

    This also doesn’t solve the problem of people reselling stolen accounts, going to multiple notaries and getting each one to individually attest and make multiple accounts to give out or sell, etc.

    with your fingerprint or FaceID Your ID doesn’t get saved. Your biometrics are only saved in the way that your iPhone saves them for a password.

    If your biometrics are stored, then there’s one of two places they could be stored and processed:

    1. On your own device (i.e. you just use your existing fingerprint lock on your phone to secure your account, say, one that’s made via a passkey so as to make fingerprint verification possible)

    This can just be bypassed by the user once they log in with their biometrics, since the credentials are then decrypted and they can just export them raw, or just have them stolen by anyone who accesses their device or installs malware, etc.

    This doesn’t solve the sale, transfer, or multiple creations of accounts.

    1. A hash of your biometrics are stored on a government server, then your device provides the resulting hash of your fingerprint scans to unlock your account to the government server when logging in.

    The scanner that originally creates the hash for your fingerprint must be trusted to not transmit any other data about your fingerprint itself, and could be bypassed by modifying network requests to send fake hashes to the government server during account creation, thus allowing for infinite “verified” accounts to be created and sold.

    This also doesn’t prevent the stealing or transfer of accounts, since you would essentially just be using your hash as a password instead of a different string of text, and then they’d just steal your hash, not a typical password. This also would mean the government would get a log of every time someone used their account, and you could be instantly re-identified the moment you go to the airport and scan your fingerprint at a TSA checkpoint, for example, permanently tying your real identity back to any account you verify with your biometrics in the future.

    The fundamental problem with these systems is that if you have to verify your identity, you must identify yourself somehow. If that requires sending your personal data to someone, it risks your privacy and security going forward. If that doesn’t require sending your personal data, then the system is easily bypassed, and its existence can’t be justified.

    What’s a solution that would be acceptable for you?

    I’ve said it before, and I’ll continue advocating for it going forward:

    • Parental controls and simple parent-controlled monitoring software on young children’s devices
    • Actual straightforward conversations between parents and kids about adult content
    • Sex ed classes.

    We already know these things do the most we can reasonably do to prevent underage viewing of adult content. We don’t need age verification laws, because they either harm privacy or don’t even work, when much simpler, common sense solutions already solve the problem just fine.


  • they then authorize you to create an account

    Authorize you how?

    That would involve someone having the ability to see which accounts where made, when, and how they were authorized, not to mention likely being able to track when they’re used in the future.

    with biometric credentials

    What does this mean? Do you mean you verify your biometric data with the notary to prove it’s you? Your ID should be enough. Do you mean where your biometric data is your password? This doesn’t prove it’s you. If processing is on-device like how phone lock screens work, then a simple piece of software could just extract the raw credentials and allow people to use/sell/transfer those, bypassing the biometrics. If it requires sending your biometric data to the company to log in like a traditional password flow, then all my previous issues with biometric verification online become present.

    There’s still a key difference between this hybrid approach and, like I mentioned previously, buying alcohol by showing your ID to a clerk at a counter, and it’s that the interaction ends there. If you show ID, buy alcohol, then leave, the store doesn’t do anything after that. There’s no system monitoring when or how much you’re drinking, or if you’ve offered some of that drink to someone underage, for example.

    But with something like what you’re proposing, the unfortunate reality is that it has to have some kind of monitoring for it to functionally work, otherwise it becomes trivially bypassed, and thus the interaction can’t end when the person leaves.

    Not to mention the fact that not all platforms people find porn on are actually dedicated porn sites. Many people are first exposed via social media, just like how they’re exposed to much of their other information and general knowledge nowadays. If we want to age gate social media porn consumption as well, we then need to age verify everyone regardless of if they intend to view porn or not, because we can’t ensure it won’t end up on their feed.

    There’s a reason why I’m so strongly against these verification methods, and it’s because they always cause a whole host of privacy and security issues, and don’t even create a strong enough system to prevent unauthorized porn viewing by minors in the first place.


  • Who under the age of 18 will have money to buy these

    Anyone with at least $0.25-$1, and access to any method of digital payments. (Gift Cards for most retailers, PayPal, Cash App, Zelle, prepaid or non-prepaid debit cards, any cryptocurrency, etc)

    and who would be willing to sell them for the pittance teenagers would be willing to spend?

    Primarily bad actors that obtain the credentials any number of ways, then either directly sell them, or sell them indirectly through third-party storefronts that buy from the bad actors in bulk. Believe me, I’ve watched hundreds of kids in Discord servers publicly sharing and using sites on the clearweb where they cashapp in a dollar then buy a stolen set of bank credentials and try withdrawing money back to their Cash App account.

    I’ve monitored so many of these sites, and seen how easy it is for anybody, even teens with limited financial payment options, to buy stolen credentials with infinitely more importance and personal security measures taken to keep them safe than something specifically for accessing an NSFW site.

    Some of these site owners operate for months before eventually shutting down and re-opening separate storefronts for anonymity, and I know of one who was selling stolen SSNs, IDs, Gift Cards, and assorted accounts, and made, by my estimates, at least a million dollars in revenue every month off items that were almost all within the price range of any child or teenager.

    Especially if these get rotated out regularly via a system wide program.

    Rotation can help, but doesn’t cut off these services from operating. They just sell stuff in smaller, more quickly refilled batches instead of buying large batches and reselling them over longer time periods. It can make prices slightly higher, but in the end it doesn’t prevent kids from accessing this content.

    But what it does end up doing is creating perverse incentives.

    It drives people to even less regulated, more harmful porn sites. It leads to the further stealing of credentials and personal information. It creates databases and online footprints that can be used to blackmail people, and it normalizes giving sensitive personal information to random websites online.

    The last thing you want when you’re trying to prevent people from getting scammed is to monetarily encourage scamming people out of their credentials and biometric data, while simultaneously making it easier for people to unknowingly hand over credentials and biometrics by normalizing the process.

    This is something practically every digital rights organization argues against, and for good reason. It’s a generally unsafe system that creates bad incentives and drives people to even more unsafe options.

    The best mechanisms by far to prevent kids from being exposed to harmful material, or at the very least prevent them from experiencing much harm from such material is often proper parental controls and general internet monitoring by those parents, good sex education, and parents actually talking with their kids instead of fostering the us vs them mentality that drives many kids to rebel against these restrictions, even when they are to benefit the kid.

    That’s why news like this is always so upsetting to me. It’s a mom who is understandably upset, but instead of taking accountability for leaving a unsecured laptop with access to the internet easily accessible to her kid while not monitoring it at all, she simply puts the blame on the platforms her child decided to access, even though we know she could have done many things herself to prevent this from happening without risking anybody’s privacy or safety, unlike what age-gating regulations do in practice.


  • The conflict that this often boils down to is that the digital world does not emulate the real world. If you want to buy porn in the real world, you need ID, but online anything goes. I love my online anonymity just as much as everybody else, but we’ll eventually need to find some hybrid approach.

    The problem is that because the internet is fundamentally different from the real world, it has its own challenges that make some of the things we do in the real world unfeasible in the digital world. showing an ID to a clerk at a store doesn’t transmit your sensitive information over the internet to/through an unknown list of companies, who may or may not store it for an undetermined amount of time, but doing so on the internet essentially has to do so.

    While I do think we should try and prevent kids from viewing porn at young ages, a lot of the mechanisms proposed to do so are either not possible, cause many other harms by their existence that could outweigh their benefits, or are trivially bypassed.

    We already scan our faces on our phones all the time, or scan our finger on our computer. How about when you want to access a porn site you have to type in a password or do some biometric credential?

    Those systems are fundamentally different, even though the interaction is the same, so implementing them in places like porn sites carries entirely different implications.

    For example, (and I’m oversimplifying a bit here for time’s sake) a biometric scan on your phone is just comparing the scan it takes each time with the hash (a processed version) of your original biometric scan during setup. If they match, the phone unlocks.

    This verification process does nothing to verify if you’re a given age, just that your face/fingerprint is the same as during setup. It also never has to transmit or store your biometrics to another company. It’s always on-device.

    Age verification online for something like porn is much more complex. When you’re verifying a user, you have to verify:

    • The general location the user lives in (to determine which laws you must comply with, if not for the type of verification, then for the data retention and security, and access)
    • The age of the user
    • The reality of the user (e.g. a camera held up to a YouTube video shouldn’t verify as if the person is the one in the video)
    • The uniqueness of the user (e.g. that this isn’t someone re-licensing the same clip of their face to be replayed directly into the camera feed, allowing any number of people to verify using the same face)
    • And depending on the local regulations, the identity of the user (e.g. name, and sometimes other identifiers like address, email, phone number, SSN, etc)

    This all carries immense challenges. It’s fundamentally incompatible with user privacy. Any step in this process could involve processing data about someone that could allow for:

    • Blackmail/extortion
    • Data breaches that allow access to other services the person has an account on
    • Being added to spam marketing lists
    • Heavily targeted advertising based on sexual preference
    • Government registries that could be used to target opponents

    This also doesn’t include the fact that most of these can simply be bypassed by anyone willing to put in even a little effort. If you can buy an ID or SSN online for less than a dollar, you’ll definitely be able to buy an age verification scan video, or a photo of an ID.

    Plus, for those unwilling to directly bypass measures on the major sites, then if only the sites that actually fear government enforcement implement these measures, then people will simply go to the less regulated sites.

    In fact, this is a well documented trend, that whenever censorship of any media happens, porn or otherwise, viewership simply moves to noncompliant services. And of course, these services can be hosting much worse content than the larger, relatively regulatory-compliant businesses, such as CSAM, gore, nonconsensual recordings, etc.


  • They can prove its signed with the governments root cert, showing that its someone over 18, but not who.

    This is generally a pretty decent system in concept, but it has some unique flaws.

    A similar system is even being developed by Cloudflare (“Privacy Pass”) to make CAPTCHAs more private by allowing you to anonymously redeem “tokens” proving you’ve solved a CAPTCHA recently, without the CAPTCHA provider having to track any data about you across sites.

    They know someone who had solved a captcha recently is redeeming a token, but they don’t know who.

    This type of system will always have one core problem that really can’t be fixed though, which is the sale and transfer of authenticated tokens/keys/whatever they get called in a given implementation.

    Someone could simply take their signed cert, and allow anybody else to use it. If you allow the government to view whoever is using their keys, but not the porn sites, then you give the government a database of every porn user with easily timestamped logs. If you don’t give the government that ability, even one cert being shared defeats the whole system. If you add a rate limit to try and solve the previous problem, you can end up blocking access if a site, browser, or extension, is just slightly misconfigured in how it handles requesting the cert, or could break someone’s ability to use their cert the moment it gets leaked.

    And even if someone isn’t voluntarily offering up their cert, it will simply get sold. I’ve investigated sites selling IDs and SSNs for less than a dollar a piece before, and I doubt something even less consequential like an ID just for accessing online adult content would even sell for that much.

    I’ve seen other methods before, such as “anonymous” scans of your face where processing is done locally to prove you’re an adult, then the result of the cryptographic challenge is sent back proving you’re over 18, but that would fail anyone who looks younger but is still an adult, can be bypassed by the aforementioned sale of personal data to people wanting to verify, and is often easily fooled by videos and photos of people on YouTube, for example.


  • There’s absolutely something to be said for trying to ensure that people don’t have access to porn as kids, but that doesn’t come from what these legal battles inevitably want to impose, which is ID check requirements that create a massive treasure trove of data for attackers to target to steal IDs, blackmail individuals, and violate people’s privacy, while adding additional costs for porn sites that will inevitably lead to predatory monetization, such as more predatory ads.

    The problem is that parents are offloading their own responsibility and education off themselves and schools, and instead placing an unworkable burden onto the sites that host and distribute pornographic content.

    We know that when you provide proper sex education, talk to kids about how to safely consume adult content without risking their health, safety, and while setting realistic expectations, you tend to get much better outcomes.

    If there’s one thing I think most people are very aware of, it’s that the more you try and hide something from kids, the more they tend to try and resist that, and find it anyways, except without any proper education or safeguards.

    It’s why abstinence only education tends to lead to worse outcomes than sex education, even though on the surface, you’re “exposing” kids to sexually related materials.

    This doesn’t mean we should deliberately expose kids to porn out of nowhere, remove all restrictions or age checks, etc, but it does mean that we can, for example:

    • Implement reasonable sex education in schools. Kids who have sex ed generally engage in healthier masturbation and sex than kids who don’t.
    • Have parents talk with their kids about safe and healthy sex & relationships. It’s an awkward conversation, but we know it keeps kids healthier and safer in the long run.
    • Implement a captcha-like system to make it a little more difficult (and primarily, slower and less stimulating) for kids to quickly access porn sites. Requiring certain somewhat higher level math problems to be solved, for example. This doesn’t rely on giving up sensitive personal info.

    Kids won’t simply stop viewing porn if you implement age gates. Kids are smart, they find their way around restrictions all the time. If we can’t reasonably stop them without producing a whole host of other extremely negative consequences, then the best thing we can do is educate them on how to not severely risk their own health.

    It’s not perfect, but it’s better than creating massive pools of private data, perverse financial incentives, and pushing people to more fringe sites that do even less to comply with the law.


  • While true, it doesn’t keep you safe from sleeper agent attacks.

    These can essentially allow the creator of your model to inject (seamlessly, undetectably until the desired response is triggered) behaviors into a model that will only trigger when given a specific prompt, or when a certain condition is met. (such as a date in time having passed)

    https://arxiv.org/pdf/2401.05566

    It’s obviously not as likely as a company simply tweaking their models when they feel like it, and it prevents them from changing anything on the fly after the training is complete and the model is distributed, (although I could see a model designed to pull from the internet being given a vulnerability where it queries a specific URL on the company’s servers that can then be updated with any given additional payload) but I personally think we’ll see vulnerabilities like this become evident over time, as I have no doubts it will become a target, especially for nation state actors, to simply slip some faulty data into training datasets or fine-tuning processes that get picked up by many models.


  • The entire reason I stopped using them was because they agreed to share more user data with Google and Microsoft in return for being allowed to keep using their search results. If they had an independent index without those kinds of tracking for big tech companies, I’d switch back in a heartbeat.