

You can’t trust yourself to be impartial. That’s why the scientific method and various approaches (even dialectics, god forbid) were invented for leading a discussion.
It doesn’t work like that. Which is why con artists and propaganda function often by “offering advice”.
I’ve dreamed of such a cult at some point, like Chinese fortune cookies, but those are less exploitable.
Obesity is a result of many things. It’s mostly hereditary, whatever you are born with, so the comment you are answering is right.
Just like other cases of “unhealthy”. Just like with them, though, it’s possible to find one’s optimum in food and sports and sleep to feel better, and a person who feels better is usually more attractive.
But looking like models you can find in the interwebs is just not possible if you are not already like them.
I’ve met people far more beautiful than most you can see in mainstream media, let alone adult materials.
(And in case one bitch with ASPD is stalking me, no, I don’t mean her; that is, this is true for her too.)
OK, sorry.
I disagree. It just won’t be fancy. It has to be an enormous project with existential risks. And you have to send many people at once, with no return ticket. “At once” is important; you can’t ramp it up, that’s far more expensive. It has to be a mission planned in deep detail with plenty of failsafe paths, aimed at building a colony that can be maintained with Earth’s teaching resources, technologies and expertise, plus locally produced and processed materials for everything. So - something like that won’t happen anytime soon, but at some point it will.
The necessary technologies have to be perfected first, computing should stop being the main tool for hype, and societies should adapt culturally to computing and worldwide connectivity.
These things take centuries. In those centuries we’ll be busy with plenty of existential problems, like keeping the planet from turning into one big 70s Cambodia.
Violence is influence too. You still need to calculate its effects.
Too bad
In practice my comment means that it’s far too early to think of space colonization.
#1 is like tactical-nuke tech available to all civilians; #2 would make sense if the whole production line and all the consumers were in space too; #3 would make sense as part of the same.
Earth gravity well is a bitch. We live in it. Sending stuff up is expensive, sending stuff down is stupid when it’s needed up there, but without some critical complete piece of civilization to send up at once, you’ll have to send stuff up all the time.
It’s too expensive and the profits are transcendent, as in “ideological achievement and because we can”. Also they may eventually start sending nukes down.
Thus it all makes sense only when we can build and equip an autonomous colony to send at once - self-reliant, on the condition that they can get the materials they need wherever they are sent.
I suggest something with gravity though. Europa or Ganymede or Enceladus. Something like that.
Quantum was popular as “oh god, our cryptography will die, what are we going to do”. Now post-quantum cryptography exists, and it doesn’t seem clear what else quantum computers are useful for, other than PR.
Blockchain was popular when the supply of cryptocurrencies was kinda small; now there are too many of them. And its actually useful applications require having offline power to make decisions. Go on, tell politicians in any country that you want the electoral system transparent and blockchain-based to avoid falsifications. LOL. They are not stupid. If you have a safe electoral system, you can do with much more direct democracy - except blockchain seems a bit of an overkill for it.
3D printing is still kinda cool, except it’s just one tool among others. It’s widely used to prototype combat drones and their ammunition. The future is here, you just don’t see it.
Cloud - well, bandwidths allowed for it and it’s good for companies, so they advertised it. Except even in the richest countries Internet connectivity is not a given, and at some point the wow effect is defeated by convenience. It’s just less convenient to use cloud stuff, except for things that don’t make sense without it - like temporary collaboration on a shared document.
“AI” - they’ve run out of stupid things to do with computers, so now they are promising the ultimate stupid thing. They don’t want smart things; smart things are smart because they change the world, killing monopolies and oligopolies along the way.
Threats work well for scams. People who couldn’t be bothered to move by promises of something new and better can be motivated by fear of losing what they already have.
It’s really unfortunate that psychology is looked down upon and psychologists are viewed as some “soft” profession. Zuck is a psychology major. It’s been two decades, and most of the radical changes in them were radical in nothing but their approach to human psychology.
BTW, I’ve learned recently that in their first few years the Khmer Rouge were not known as a communist organization even to many of their own members. Just an “organization”. Their rhetoric was agrarian (of course peasants are hard-working, virtuous people, all wisdom comes from working the earth, and those corrupt and immoral people in the cities should be made to work for their food), Buddhist (of course the monk-feudal system of obedience, work and asceticism is the virtuous way to live, though of course we are having a rebirth now, so we are even wiser), monarchist (they invoked Sihanouk’s authority almost to the end), and anti-Vietnamese (the Vietnamese were the evil, like the Jews for the German Nazis). And for some time after taking power they still didn’t communicate anything communist. They didn’t even introduce their leadership. Nobody knew who made the decisions in that “organization” or how it was structured. It didn’t have a face. They only made themselves officially visible as Democratic Kampuchea, with communism and actual leaders, when the Chinese pressured them. They didn’t need to before, because they were obeyed via the threat (and plenty of fulfillment) of violence anyway.
This is important in the sense that when you have power, you don’t need to officially tell the people over whom you have it that you rule them.
So - in these two decades it has also come into fashion to deliberately, stubbornly ignore the fact that psychology works over masses. And everybody acts as if, when there’s no technical means to make people do something, it’s neither likely nor possible.
If each unplanned death that was not the result of an operator’s mistake led to confiscation of one month’s profit (not margin), I think it would help very much.
Don’t get me wrong, AI has its uses, but their whole “solution for everything” mentality is the problem.
They are trying to somehow undo or redo personal computers.
To create a non-transparent tool that replaces the need (and thus social possibility) to have a universal machine.
The difference between thinking robots and computers as we have them is that thinking robots take some place in the social hierarchy, and computers help everyone who has a computer and uses it.
Science fiction usually portrayed artificial humans, not computers, before actually, ahem, seeing the world as it turned out.
It’s sort of a social power revolt against intellectual power (well, some kind of it).
Like a tantrum. People who don’t like how it really happened still want their deus ex machina - an obedient slave, one that can even take responsibility. Their 50-year-long shock has receded, and they now think they are about to turn this defeat into victory.
Only making it bigger and last longer, which will only make it worse when it does actually pop.
I think that’s deliberate. There are a few companies that will do very well when the bubble pops, having the actual audience as their main capital, their capitalization and technologies being secondary. The rest are just blindly led by short-term profits.
This is, I think, true. Would be pretty traditional for empires, to test everything new in colonies first, then bring it back. From weapons to beer to laws.
Why do all these idiots behave as if they knew where the future is?
If it’s about all the achievements they’ve read about and seen in games like Civilization - real life doesn’t quite look like that. Though in some sense these games, good as they are, have simplified and degraded many people’s understanding of progress. Similarly to what the Soviet school program did, but in a more persuasive and pleasant way.
There’s no tech tree. There have been plenty of attempts at every breakthrough before it actually happened. Suppose this “AI” is to some real future AGI what Leonardo’s machines were to the Wright brothers’ machines - even in that case, there’s no hurry to embrace it.
If he thinks he’s looking at a 90% achieved tech tree point with powerful perks, then his profession should probably be that of a janitor. Same day schedule, same places to mop up, you know.
Yes, I meant in case you have a library of FLACs. In that case it wouldn’t be too problematic because, well, it’s just a script recursing through your library, encoding from FLAC to Opus and, if that succeeds, removing the FLAC files.
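A minimal sketch of such a script, assuming ffmpeg built with libopus is on PATH (the function name, bitrate, and the pluggable `encode` hook are all illustrative, not anyone’s actual script):

```python
import pathlib
import subprocess

def transcode_library(root, encode=None):
    """Recursively encode every .flac under `root` to .opus, deleting the
    FLAC only after its Opus counterpart was produced successfully."""
    root = pathlib.Path(root)
    if encode is None:
        # Assumption: ffmpeg with libopus is installed; 128k is an example bitrate.
        def encode(src, dst):
            result = subprocess.run(
                ["ffmpeg", "-loglevel", "error", "-i", str(src),
                 "-c:a", "libopus", "-b:a", "128k", str(dst)])
            return result.returncode == 0
    for flac in sorted(root.rglob("*.flac")):
        opus = flac.with_suffix(".opus")
        if encode(flac, opus):
            flac.unlink()  # remove the FLAC only on success
```

Deleting only after a successful encode is the important part: an interrupted run leaves both files rather than losing the original.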
Your own opinions are, in any case, the result of a much bigger amount of much more relevant data.
An AI model is a set of coefficients averaging a dataset by a “one size fits all” measure. Those coefficients are found by an expensive process using criteria (again, “one size fits all”) set by the company making it. From them, its machine generates (looks up, actually) the most probable text; it’s like a music box. A beautiful toy.
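The “music box” lookup can be illustrated with a toy bigram table (the corpus here is made up, and real models interpolate over far more context, but the principle of emitting the most probable continuation is the same):

```python
from collections import Counter, defaultdict

# Made-up toy corpus; a real model averages over vastly more data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# The "coefficients": counts of which word follows which.
table = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    table[a][b] += 1

def most_probable_next(word):
    # Pure lookup: emit the most frequent continuation, the same
    # answer in every situation; no abstract model involved.
    return table[word].most_common(1)[0][0]
```

Here `most_probable_next("the")` returns `"cat"`, because “cat” followed “the” twice while “mat” and “fish” followed it once each.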
So you have different motivations and abstract ideas in different situations; you also have something like a shared codebook with the other people making decisions - your instincts and associations. Reading what they say or seeing what they do, you get a mirror model in your head. It might be worse, but it’s something very hard for text analysis to approach.
That model doesn’t; it has the same average line for all situations, and it also can’t determine (on the level described) that it doesn’t know something. To determine that you don’t know something, you need an abstract model, not a language model.
I dunno what their current state is; all I’ve read and kinda understood was seemingly about optimizing computation for language models and structuring their application to imitate a syllogism system.
I think that with the current approaches, building a system that translates language into a certain abstract model (tokenization isn’t even close to that; you’d need a topology with areas that can easily merge or split, rather than token points with distances), and abstract entities back into language, would be very computationally expensive.