How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
http://yudkowsky.net/rational/virtues
http://wiki.lesswrong.com/wiki/FAQ
http://wiki.lesswrong.com/wiki/Sequences
A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no? (The Parable of the Dagger.)
You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said "Socrates is a man", not, "My brain perceptually classifies Socrates as a match against the 'human' concept". (How An Algorithm Feels From Inside.)
i don't even know what's going on with these people. apparently there are many of them at g00gle.
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:15 (twelve years ago)
and then they hired this guy:
http://www.kurzweilai.net/singularity-q-a
Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like The Matrix), “experience beaming” (like “Being John Malkovich”), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:19 (twelve years ago)
In particular, if you want to do any of the following, consider doing lots of homework and ensure you're not making any standard mistakes:
* claim your god exists
* argue for a universally compelling morality
* claim you have an easy way to make superintelligent AI safe
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:21 (twelve years ago)
interesting, i'll have to check out these links when i get home. i'm enjoying how things are turning out these days, everything's looking more and more like some techno-dystopia novel
― Spectrum, Friday, 5 April 2013 17:21 (twelve years ago)
Yudkowsky has also written several works[18] of science fiction and other fiction. His Harry Potter fan fiction story Harry Potter and the Methods of Rationality illustrates topics in cognitive science and rationality (The New Yorker described it as "a thousand-page online 'fanfic' text called 'Harry Potter and the Methods of Rationality', which recasts the original story in an attempt to explain Harry's wizardry through the scientific method"[19]).
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:22 (twelve years ago)
i want holobeer
― ciderpress, Friday, 5 April 2013 17:23 (twelve years ago)
i have some friends who were into that harry potter thing, had no idea that's what it was about but makes more sense now
robin hanson at www.overcomingbias.com is like their 'respectable' academic figure, he thinks prediction markets can solve every human problem. also peter thiel gives these people money.
― iatee, Friday, 5 April 2013 17:26 (twelve years ago)
has anyone met these people irl? what are they like? they're speaking like some strange sci-fi language like aliens-of-the-week on star trek voyager or something. do they wear star trek shirts?
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 17:30 (twelve years ago)
I'm friends w/one of these guys, he is nice, very socially awkward, believes that the singularity is just round the corner
― c21m50nm3x1c4n (wins), Friday, 5 April 2013 17:41 (twelve years ago)
he explained what it was all about to me once, it sounded cuckoo but I believe his heart is in the right place
― c21m50nm3x1c4n (wins), Friday, 5 April 2013 17:43 (twelve years ago)
Logic and rationality are just a nice set of tools among many. Used alone they cannot supply you with a reason for doing anything until you have a set of arbitrary axioms defining what is good. This is nicely illustrated by the childish game of replying to whatever someone says by asking "why?"
imo people who worship rationality as if it were some infallible god are disgusting savages.
― Aimless, Friday, 5 April 2013 17:47 (twelve years ago)
Aimless, I think you are only half-right, if I may say so.
I believe our shared reality can be divided thusly:
0. Ontology
i. Objectivity
i. Subjectivity
0. Epistemology
i. Objectivity
i. Subjectivity
Where everything that is '0' is on a same level (not hierarchical) and 'i' falls into these categories while still being on the same level with the rest of the i's.
The difficulty arises when trying to define what is ontologically subjective or objective and epistemologically subjective or objective.
Not everything has a truth value. Just as we cannot speak in terms of 'good' and/or 'bad' about everything.
― c21m50nh3x460n, Friday, 5 April 2013 17:54 (twelve years ago)
Why would anyone try to define what is ontologically subjective or objective and epistemologically subjective or objective?
― Aimless, Friday, 5 April 2013 18:01 (twelve years ago)
Are you being genuine or "playing a childish game"?
― c21m50nh3x460n, Friday, 5 April 2013 18:03 (twelve years ago)
well, this is taking a turn.
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 18:03 (twelve years ago)
oh lord, let's just stick to how freaky these guys are
― Spectrum, Friday, 5 April 2013 18:03 (twelve years ago)
So while beliefs about the best sport or music may vary by culture, for the purpose of picking good mates or allies you can’t go too wrong by being impressed by whomever impresses folks from other cultures, and you have incentives not to make mistakes. For example, if you are mistakenly impressed by and mate with someone without real sport or music abilities, you may end up with kids who lack those abilities, and fail to impress the next generation.
bet these guys all love the ladder theory
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 18:05 (twelve years ago)
bet in high school they all wore geordi glasses and tried to talk like data to their friends.
― Chuck E was a hero to most (s.clover), Friday, 5 April 2013 18:06 (twelve years ago)
P.S. You don't really have to answer that. My only point is that the answer to this question must eventually rest upon a motivation that may be described rationally, but cannot be derived rationally.
― Aimless, Friday, 5 April 2013 18:06 (twelve years ago)
they also really like bayesian inference
― iatee, Friday, 5 April 2013 18:07 (twelve years ago)
The eighth virtue is humility. To be humble is to take specific actions in anticipation of your own errors.
― the late great, Friday, 5 April 2013 18:30 (twelve years ago)
that's not a good definition of humility imo
― the late great, Friday, 5 April 2013 18:31 (twelve years ago)
Aimless, does it really matter that a motive can only be described rationally while no motive may be derived from things rationally?
It's all nice as a theoretical and intellectual exercise, but I question its practicality and real-world application, with all due respect.
― c21m50nh3x460n, Friday, 5 April 2013 18:34 (twelve years ago)
xp
Taking specific actions in anticipation of your own errors is certainly an act requiring a measure of humility. Perhaps this is the only act of humility he has any familiarity with. This is a bit like saying "a bear is a large, brown, powerful, furry creature that lives in a den several miles from my house".
does it really matter that a motive can only be described rationally while no motive may be derived from things rationally?
It only matters if you would like to understand the limitations of rationality as a tool and its proper sphere of functionality. Motives may seem to be a mere rump to most of our mental activity, especially if you value rationality above all else. After all, motives supply themselves in profusion whether you think much about them or not. I would submit that this peculiar fact requires the most careful and patient observation, and understanding the source of motives is far from being a mere theoretical and intellectual exercise.
― Aimless, Friday, 5 April 2013 18:48 (twelve years ago)
― iatee, Friday, 5 April 2013 14:07 (1 hour ago)
haha this makes perfect sense
― flopson, Friday, 5 April 2013 19:54 (twelve years ago)
Exhibit A that these folk are pure nutjobs
http://lesswrong.com/lw/kn/torture_vs_dust_specks/
― riverrun, past Steve and Adam's (ledge), Friday, 5 April 2013 22:27 (twelve years ago)
Suppose I got up one morning, and took out two earplugs, and set them down next to two other earplugs on my nighttable, and noticed that there were now three earplugs, without any earplugs having appeared or disappeared—in contrast to my stored memory that 2 + 2 was supposed to equal 4. Moreover, when I visualized the process in my own mind, it seemed that making XX and XX come out to XXXX required an extra X to appear from nowhere, and was, moreover, inconsistent with other arithmetic I visualized, since subtracting XX from XXX left XX, but subtracting XX from XXXX left XXX. This would conflict with my stored memory that 3 - 2 = 1, but memory would be absurd in the face of physical and mental confirmation that XXX - XX = XX.
― Chuck E was a hero to most (s.clover), Saturday, 6 April 2013 15:39 (twelve years ago)
fyi they are among us. there's already an active hidden rationalism AI cultist creep thread on ilx.
― Mordy, Saturday, 6 April 2013 15:43 (twelve years ago)
https://www.youtube.com/watch?v=iFjd9IQfjZg
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 17:43 (twelve years ago)
10 years ago i was reading about yudkowsky when he was "making" an a.i. from what i understand it's a somewhat murky field so self-made men can sort of rope people into their projects. lots of those "cultists" are into a.i., i forgot most of their names but there was goertzel http://wp.novamente.net/ , i think a guy from http://www.cyc.com/faq-page#n496 etc
― Sébastien, Saturday, 6 April 2013 18:02 (twelve years ago)
some of these guys managed to burn some millions on their projects so it was sort of exciting; i was reading that stuff as cool sf with the option of some results. it's been 10 years and i haven't really heard of them, so...
― Sébastien, Saturday, 6 April 2013 18:07 (twelve years ago)
http://www.the-rudy.com/images/iggy-pop_rational-ht2.jpg
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 18:13 (twelve years ago)
IBM threw hueg resources into Deep Blue (chess) and Watson (Jeopardy) and came away with a ton of great publicity and some technical expertise it could generalize elsewhere, but you've probably noticed that IBM is not yet selling a version of HAL. AI enthusiasts without an NSA, IBM, Google, or Apple paying the freight are notorious overreachers.
― Aimless, Saturday, 6 April 2013 18:17 (twelve years ago)
otm, more or less, although after the continued black eyes AI received many people seemed to drop down into subfields like machine learning and data mining which allowed them to focus on the technical tasks at hand and to avoid using the freighted term "AI" too much. So on the one hand technical successes of AI may live on under different names, on the other true believers of the most grandiose philosophical claims still fly the flannel and ask DO U SEE?
Had not known that James Lighthill was one of the first big critics.
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 19:12 (twelve years ago)
"but you've probably noticed that IBM is not yet selling a version of HAL"
i thought they were, except HAL is handling customer service phone trees instead of running space stations.
― Philip Nunez, Saturday, 6 April 2013 19:35 (twelve years ago)
Which is, um, not quite as hard?
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 19:37 (twelve years ago)
I'll say. HAL really fucked up that space gig.
― Philip Nunez, Saturday, 6 April 2013 19:49 (twelve years ago)
1) HAL was doing fine until the unpredictability of his super-human intelligence made him psychotic
2) HAL is a fictional construct
3) Please provide the 1950s era paper in which someone, preferably Alan Turing, states that if in 50 years we have created a machine that can traverse a tree of extremely limited depth and width using a clearly synthetic or prerecorded voice then we can congratulate ourselves for having built something rivaling the human brain itself.
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 20:22 (twelve years ago)
Placing the fictional HAL beside the equally fictional construct of the "singularity", HAL seems to be the more probable.
― Aimless, Saturday, 6 April 2013 20:47 (twelve years ago)
In the movie at least, they trade on the creepiness of HAL's anthropomorphomormomorphization but he's ultimately rendered as just another tool gone on the fritz (complete with bowman as frustrated sys-admin; bowman also demurs when the reporter asks if HAL has a soul), so to the extent that we have things today like apple maps giving terrifyingly bad directions, we have definitely delivered on the promise of HAL.
― Philip Nunez, Saturday, 6 April 2013 21:15 (twelve years ago)
you know, if they would make their workshop into an ebook i would check it out. if it's less than 200 pages. http://appliedrationality.org/schedule/
― Sébastien, Saturday, 6 April 2013 21:39 (twelve years ago)
Looks like the myriad achievements of poor HAL are ignored as he is shoe-horned into being the latest of a long line of ILX strawmen.
― What About The Half That's Never Been POLLed (James Redd and the Blecchs), Saturday, 6 April 2013 21:41 (twelve years ago)
http://singularityhub.com/about/
http://lukeprog.com/SaveTheWorld.html
Hardware and software are improving, there are no signs that we will stop this, and human biology and biases indicate that we are far below the upper limit on intelligence. Economic arguments indicate that most AIs would act to become more intelligent. Therefore, intelligence explosion is very likely. The apparent diversity and irreducibility of information about "what is good" suggests that value is complex and fragile; therefore, an AI is unlikely to have any significant overlap with human values if that is not engineered in at significant cost. Therefore, a bad AI explosion is our default future.
it's deeply weird to me how much of this stuff is out there, and how much is fixated on the idea that superintelligent machines are coming soon and the big problem is making sure they don't decide to kill all humans.
― Chuck E was a hero to most (s.clover), Sunday, 7 April 2013 00:32 (twelve years ago)
* How can we identify, understand, and reduce cognitive biases?
* How can institutional innovations such as prediction markets improve information aggregation and probabilistic forecasting?
* How should an ethically-motivated agent act under conditions of profound moral uncertainty?
* How can we correct for observation selection effects in anthropic reasoning?
http://www.fhi.ox.ac.uk/research/rationality_and_wisdom
― Chuck E was a hero to most (s.clover), Sunday, 7 April 2013 00:34 (twelve years ago)
how much is fixated on the idea that superintelligent machines are coming soon and the big problem is making sure they don't decide to kill all humans.
Some of us were traumatised by Servotron at a young age, OK?
― Just noise and screaming and no musical value at all. (Colonel Poo), Sunday, 7 April 2013 00:54 (twelve years ago)
"how much of this stuff is out there" : the big ideas are made by the same few people (yudkowsky, maybe bostrom) and the evangelization is made by about a dozens younger "lesser names" (that probably were hanging out on the sl4 mailing list) on 3 or 4 of their platforms that they rename / shuffle around every few years. they were "theorizing" about friendly a.i. way back then, i doubt they made any breakthroughs since then... how could they?
― Sébastien, Sunday, 7 April 2013 01:06 (twelve years ago)
in a way the "friendly a.i." advocates are like the epicureans who 2300 years ago conceptualized the atom only by using their bare eyes and their intuition: some time down the line we sort of prove them right but back then they really had no good understanding of how it worked. who knows, in the (far) future it's possible some stuff they talk about in their conceptualization of a friendly a.i. will be seen as useful and recuperated.
― Sébastien, Sunday, 7 April 2013 02:25 (twelve years ago)
Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.[6]
http://rationalwiki.org/wiki/Roko%27s_basilisk
― stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 01:49 (twelve years ago)
In LessWrong's Timeless Decision Theory (TDT),[3] this is taken to be equivalent to punishment of your own actual self, not just someone else very like you — and furthermore, you might be the simulation.
sounds like a great theory to have, sooper sound
― j., Thursday, 1 August 2013 02:55 (twelve years ago)
if you can believe that life is a simulation where you may be punished you might as well believe that there's a god who wants you to act a certain way
― Mordy , Thursday, 1 August 2013 02:58 (twelve years ago)
and i thought these people were only slightly crazy
― click here to start exploding (ledge), Thursday, 1 August 2013 10:46 (twelve years ago)
at least they seem to be nominally self-policing
― j., Thursday, 1 August 2013 10:59 (twelve years ago)
i mean tdt and rokos basilisk basically turns the AI into an arbitrary and all powerful god, if it wasnt that already
― max, Thursday, 1 August 2013 11:03 (twelve years ago)
A god who wants you to send all your money to a particular organisation - so Yudkowsky is the televangelist in this scenario.
Yeah it is interesting that a bunch of self proclaimed rationalists have managed to reason themselves into a belief system functionally identical to one they would decry as irrational.
― click here to start exploding (ledge), Thursday, 1 August 2013 11:49 (twelve years ago)
i love it! its borgesian
― max, Thursday, 1 August 2013 11:52 (twelve years ago)
yeah i think the aporia there might be the "self proclaimed" bit
― phasmid beetle types (Noodle Vague), Thursday, 1 August 2013 12:10 (twelve years ago)
If I were a future AI, I'd recreate the LessWrong memeplex, except I'd call it LessRong. Why? For the lulz...
― slamming on the dubstep brakes (snoball), Thursday, 1 August 2013 12:39 (twelve years ago)
Roko's basilisk sounds like the name of an Italian prog rock band.
― slamming on the dubstep brakes (snoball), Thursday, 1 August 2013 12:48 (twelve years ago)
Roko's basilisk is notable for being completely banned from discussion on LessWrong, where any mention of it is deleted.[4] Eliezer Yudkowsky, founder of LessWrong, considers the basilisk would not work, but will not explain why because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.
― stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 12:49 (twelve years ago)
that wiki article is like half of a great ted chiang short story
― max, Thursday, 1 August 2013 12:55 (twelve years ago)
Found the following beautiful sentence at the bottom of the LessWrong page:
The basilisk kerfuffle has also alienated fellow cryonicists.
― click here to start exploding (ledge), Thursday, 1 August 2013 12:58 (twelve years ago)
Why a basilisk?
― wombspace (abanana), Thursday, 1 August 2013 13:03 (twelve years ago)
http://en.wikipedia.org/wiki/David_Langford#Basilisks
― click here to start exploding (ledge), Thursday, 1 August 2013 13:06 (twelve years ago)
I'm not sure you should give these guys what they want and proclaim them to be the vanguard of Hard AI proponents... I really don't think anyone who has seriously grappled with the philosophical implications of, say, the physical symbol system hypothesis, could ever proclaim any development or avenue of research to be "provably friendly."
Furthermore I think it's not very fair to suggest any and all fans, theorists or proponents of AI are similarly robotic in their thinking as these LessWrong people.
― Kissin' Cloacas (Viceroy), Thursday, 1 August 2013 13:41 (twelve years ago)
this is the best part
[T]here is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half).
because it means that one of the things that caused "severe psychological distress" was the suggestion that posters on rationalism message boards would in the future be punished for being smarter than everyone
― one yankee sympathizer masquerading as a historian (difficult listening hour), Thursday, 1 August 2013 14:34 (twelve years ago)
what a terrifying perversion of one's value system
― one yankee sympathizer masquerading as a historian (difficult listening hour), Thursday, 1 August 2013 14:36 (twelve years ago)
These fools are the enemy of the true cybernetic revolution.
― Banaka™ (banaka), Thursday, 1 August 2013 17:15 (twelve years ago)
ok sam harris doesn't really belong here but c'mon
http://www.samharris.org/blog/item/free-will-and-the-reality-of-love
― stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 19:10 (twelve years ago)
Consider the present moment from the point of view of my conscious mind: I have decided to write this blog post, and I am now writing it. I almost didn’t write it, however. In fact, I went back and forth about it: I feel that I’ve said more or less everything I have to say on the topic of free will and now worry about repeating myself. I started the post, and then set it aside. But after several more emails came in, I realized that I might be able to clarify a few points. Did I choose to be affected in this way? No. Some readers were urging me to comment on depressing developments in “the Arab Spring.” Others wanted me to write about the practice of meditation. At first I ignored all these voices and went back to working on my next book. Eventually, however, I returned to this blog post. Was that a choice? Well, in a conventional sense, yes. But my experience of making the choice did not include an awareness of its actual causes. Subjectively speaking, it is an absolute mystery to me why I am writing this.
this is sub david brooks
― stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 19:11 (twelve years ago)
this is not going to shock anyone but this frame of mind/crew of people trends very strongly into some supremely nasty politics
― R'LIAH (goole), Thursday, 1 August 2013 19:12 (twelve years ago)
http://lesswrong.com/lw/hcy/link_more_right_launched/
― R'LIAH (goole), Thursday, 1 August 2013 19:13 (twelve years ago)
ahahaa "Just so long as we don't end up with an asymmetrical effect, where the PUAs leave but the feminists stay."
― stefon taylor swiftboat (s.clover), Thursday, 1 August 2013 19:32 (twelve years ago)
ah god i don't think i've seen the term "race realism" before
― phasmid beetle types (Noodle Vague), Thursday, 1 August 2013 20:04 (twelve years ago)
The all-important gap between labeling yourself as a rationalist and actually using your reason; between labeling yourself as an empiricist and actually studying phenomena.
― cardamon, Thursday, 1 August 2013 21:21 (twelve years ago)
first thing i do after the singularity: allow myself to get a girlfriend! (i have actually read that from one of the big kahunas in a chat years ago. screencapped it but decided not to save for luls, i'm not that kind of guy)
― Sébastien, Thursday, 1 August 2013 22:58 (twelve years ago)
the next sentence written in the chat was from him again: "that was dumb."
― Sébastien, Thursday, 1 August 2013 23:01 (twelve years ago)
everyone will be your girlfriend after the singularity iirc
― しるび (silby), Friday, 2 August 2013 01:43 (twelve years ago)
also that basilisk thing is o_O
not least because it apparently relies in part on some absurd population ethics ("total utilitarianism" they call it)
― しるび (silby), Friday, 2 August 2013 01:44 (twelve years ago)
― R'LIAH (goole), Thursday, August 1, 2013 7:13 PM (Yesterday)
Why did you get me down the rabbit hole of a right-wing blog?
If there are multiple cultural/ethnic identities, they need to be either assimilated into one another, be distinctive and have clear guidelines for interaction, or be separated and with separate administrative structures.
― click here to start exploding (ledge), Friday, 2 August 2013 09:52 (twelve years ago)
i jumped ship at "race realism"
― IIIrd Datekeeper (contenderizer), Friday, 2 August 2013 11:19 (twelve years ago)
i think they are confusing cultural/ethnic identities with member planets of the federation?
― stefon taylor swiftboat (s.clover), Friday, 2 August 2013 13:06 (twelve years ago)
also from that blog
When it comes to art and music, thinkers intuitively realize that the most popular works are the most trivial and idiotic, but when it comes to politics, the uninformed opinions of the masses are placed on a pedestal. The reason for this inconsistent view is a sort of Democratic pseudo-religion that has been in place in the Anglosphere since around 1848.
― stefon taylor swiftboat (s.clover), Friday, 2 August 2013 13:16 (twelve years ago)
Advocates of Democracy try to rewrite history and imply that Enlightenment principles are fundamentally incompatible with Monarchy, but this is clearly untrue. Voltaire, known as one of the greatest thinkers of the Enlightenment, had a close relationship to a number of monarchs, including Frederick the Great, and advised him regularly. It was economic and cultural flourishing brought on by absolute monarchy in France that created the conditions for the Enlightenment and the Scientific Revolution. All of this was underway well before the French Revolution.
what monarchy do you want!? do you want to join the commonwealth!?
― stefon taylor swiftboat (s.clover), Friday, 2 August 2013 13:18 (twelve years ago)
For roughly 165 years (since 1848), democracy has caused social and economic mayhem worldwide. Rule-of-the-People has caused vastly increased crime (100X in the UK since 1800)
ladies and gentlemen, statistics!
― click here to start exploding (ledge), Friday, 2 August 2013 13:20 (twelve years ago)
i think it's pretty obv that these people are far less smart and rational than they claim to be and after that by their own rules we're okay to ignore them
― phasmid beetle types (Noodle Vague), Friday, 2 August 2013 13:22 (twelve years ago)
aaaand now we're moving into infowars territory....
Speaking for myself personally, my key motivation is not having to witness or experience global nanowar. For a grasp of the capabilities that could be invoked during such a war, I recommend the obscure volume Military Nanotechnology: Potential Applications and Preventive Arms Control.
It’s laborious for me to explain why small robots would be a major risk, because it should be self-evident. Very small robots could be made exceedingly stealthy, they could provide comprehensive surveillance of enemy activities, and could inject lethal payloads of just a few microliters. Moreover, they could self-detonate after carrying out their mission, making them untraceable.
― stefon taylor swiftboat (s.clover), Friday, 2 August 2013 13:22 (twelve years ago)
uuuughhhhh ok if i HAVE TO EXPLAIN IT TO YOU
― j., Friday, 2 August 2013 13:27 (twelve years ago)
lol goole you have the weirdest hobby
It was economic and cultural flourishing brought on by absolute monarchy in France that created the conditions for the Enlightenment and the Scientific Revolution. All of this was underway well before the French Revolution.
Yeah but the computer this guy is typing this on is a product of capitalism proper which is only possible after the French Revolution and the death blow it dealt to the vestigial feudalism
Also questionable whether absolute monarchy 'brought on economic flourishing'? Weren't the poor in the period leading up to the Revolution having to subsist on grass and hay?
― cardamon, Friday, 2 August 2013 13:45 (twelve years ago)
I'd love to be clever enough to write an algorithm that measured the ratio of pro-reason rhetoric to actual chains of reasoning in all forum posts and comment boxes on the internet
― cardamon, Friday, 2 August 2013 13:48 (twelve years ago)
don't i know it, j.
― R'LIAH (goole), Friday, 2 August 2013 16:12 (twelve years ago)
yeah nah
― badg, Friday, 9 August 2013 15:21 (twelve years ago)
Is that quote from The Third Man?
― The O RLY of Everything (James Redd and the Blecchs), Saturday, 10 August 2013 00:45 (twelve years ago)
http://lesswrong.com/lw/ipm/a_map_of_bay_area_memespace/
― Saul Goodberg (by Musket and Pup Tent) (s.clover), Tuesday, 22 October 2013 00:55 (twelve years ago)
http://rationality.org/wp-content/uploads/2013/09/Bay-Area-memespace1.jpg
sorta want cali to fall into the ocean right about now
― Saul Goodberg (by Musket and Pup Tent) (s.clover), Tuesday, 22 October 2013 00:58 (twelve years ago)
the meme of trying things and learning from the real world
― j., Tuesday, 22 October 2013 00:59 (twelve years ago)
"Heuristics and biases research" is a meme that was not derived from any group or subculture. It sprang up ex nihilo?
― Aimless, Tuesday, 22 October 2013 01:13 (twelve years ago)
I have to say, the people I know who are into this stuff are actually really nice
― Guayaquil (eephus!), Tuesday, 22 October 2013 01:17 (twelve years ago)
A lot of what that map says is wrong, but they got a few things right.
I stopped reading when I got to this: "Some of the basic building blocks of rationality come from computer science".
Maybe it is just poorly worded, but I stopped, nonetheless.
― c21m50nh3x460n, Tuesday, 22 October 2013 01:24 (twelve years ago)
also most interesting ppl around ime xp
― Mordy , Tuesday, 22 October 2013 02:01 (twelve years ago)
i mean the weirdo transhumanist singularity bros, not the boring libertarian redditers or the alternative approaches to wellness peeps
― Mordy , Tuesday, 22 October 2013 02:03 (twelve years ago)
the ones i know are experimental rationalists who obsessively prod their biases and play elaborate group games aimed at this purpose, not really any of the three categories above i think
― Guayaquil (eephus!), Tuesday, 22 October 2013 05:02 (twelve years ago)
http://lesswrong.com/r/discussion/lw/ldv/harpers_magazine_article_on_lwmiricfar_and/
“Come with us if you want to live: Among the apocalyptic libertarians of Silicon Valley” bootlegged and discussed on LessWrong
― rap is dad (it's a boy!), Tuesday, 16 December 2014 16:31 (ten years ago)
He’d founded a music-licensing start-up called Sir Groovy.
― Οὖτις, Tuesday, 16 December 2014 16:38 (ten years ago)
need to read that, came across it a little while ago. these guys are so funny.
― goole, Tuesday, 16 December 2014 16:44 (ten years ago)
John_Maxwell_IV, 13 December 2014 12:05:41AM
I'm curious what the goal of communicating with this journalist was. News organizations get paid by the pageview, so they have an incentive to sell a story, not spread the truth. And journalists also are famous for misrepresenting the people and topics they cover. (Typically when I read something in the press that discusses a topic I know about, they almost always get it a little wrong and often get it a lot wrong. I'm not the only one; this has gotten discussed on Hacker News. In fact, I think it might be interesting to start a "meta-journalism" organization that would find big stories in the media, talk to the people who were interviewed, and get direct quotes from them on if/how they were misrepresented.) If media exposure is a goal, you don't work with random journalists who come to you telling you that they want to include you in stories. You hire a publicist or PR firm that does the reverse and takes your story to journalists and makes sure they present it accurately.
― goole, Tuesday, 16 December 2014 17:21 (ten years ago)
literally everything these people say reads like a joke
― Οὖτις, Tuesday, 16 December 2014 17:25 (ten years ago)
for a bunch of guys who "have it all figured out" they all seem strangely, fundamentally unhappy
― Οὖτις, Tuesday, 16 December 2014 17:27 (ten years ago)
I think it might be interesting to start a "meta-journalism" organization that would find big stories in the media, talk to the people who were interviewed, and get direct quotes from them on if/how they were misrepresented.
it's about ethics in journalism!
― ledge, Tuesday, 16 December 2014 17:30 (ten years ago)
lol
― Οὖτις, Tuesday, 16 December 2014 17:31 (ten years ago)
a onetime student of mine, very sharp, hyper-rational type, rose (not through any assistance i gave) to the heights of the philosophical elite, where he worked on far-future-focused effective altruism. he has since joined an ALTRUISM STARTUP based in SAN FRANCISCO.
i never really wondered about him until i started reading about the overlap between far-future EA and the lesswrong don't-think-about-the-AI people
but now…
― j., Tuesday, 16 December 2014 17:36 (ten years ago)
for a bunch of guys who "have it all figured out" they all seem strangely, fundamentally unhappy
― Οὖτις, Tuesday, December 16, 2014 12:27 PM (29 minutes ago)
― Murdstone From The Sun (James Redd and the Blecchs), Tuesday, 16 December 2014 17:58 (ten years ago)
of course not
― Οὖτις, Tuesday, 16 December 2014 18:07 (ten years ago)
these guys are basically motivated by the same insecurities as the audiences you find at self-help/guru seminars
― Οὖτις, Tuesday, 16 December 2014 18:09 (ten years ago)
I must optimize myself/control the universe! it's a kind of manic oscillation between self-loathing and megalomania
― Οὖτις, Tuesday, 16 December 2014 18:10 (ten years ago)
Ha, exactly! I just wanted you to elaborate.
― Murdstone From The Sun (James Redd and the Blecchs), Tuesday, 16 December 2014 18:27 (ten years ago)
this is good https://www.kickstarter.com/projects/2027287602/neoreaction-a-basilisk
― are you ellie (s.clover), Sunday, 15 May 2016 21:10 (nine years ago)
fun. what do you know about the author?
― goole, Monday, 16 May 2016 15:09 (nine years ago)
(will it be less weird than reza negarestani tho)
i think philosophy + art theory guy simon o'sullivan is also working on something in that area, maybe developing from this interesting article from a couple of years ago - http://www.metamute.org/editorial/articles/missing-subject-accelerationism
― lazy rascals, spending their substance, and more, in riotous living (Merdeyeux), Monday, 16 May 2016 15:12 (nine years ago)
i like sandifer's tardis eruditorum. he writes way too much though, like a new essay every day.
― remove butt (abanana), Monday, 16 May 2016 15:28 (nine years ago)
I've been learning Bayesian methods for work, and these guys have completely co-opted the phrase "Bayesian" across the entire internet. To them it's just an empty tribal indicator--they tie themselves in knots with their endless discussions (the number one hallmark of this kind of guy: BLOVIATION), and obviously have no conception of using bayesian stats to actually, like, Do Science. They're all obsessed with E. T. Jaynes because Eliezer is; and I'm sure Jaynes is a great thinker (haven't read him), but a mention of him is an easy way to tell when someone is full of shit.
― Dan I., Tuesday, 2 August 2016 16:33 (nine years ago)
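For contrast, "using bayesian stats to actually, like, Do Science" can be as small as a conjugate update. A minimal sketch in Python; the uniform Beta(1, 1) prior and the 7-successes-in-20-trials data are invented for illustration:

from scipy import stats

# Conjugate beta-binomial update: posterior counts = prior counts + observed counts.
prior_a, prior_b = 1, 1        # Beta(1, 1), a uniform prior on the unknown rate
successes, failures = 7, 13    # hypothetical data: 7 hits in 20 trials

posterior = stats.beta(prior_a + successes, prior_b + failures)
print(f"posterior mean: {posterior.mean():.3f}")           # ~0.364
print(f"95% credible interval: {posterior.interval(0.95)}")

The inference step itself is two additions; the scientific work is in checking whether the model fits, which is what Gelman credits Jaynes for below.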
cleanse your palate by reading andrew gelman, who's bayesian as hell and in no way affiliated with rationalist AI cultist creeps
― Guayaquil (eephus!), Tuesday, 2 August 2016 16:44 (nine years ago)
Gelman on Jaynes:
"E. T. Jaynes was a physicist who applied Bayesian inference to problems in statistical mechanics and signal processing. He was an excellent writer with a dramatic style, and some of his work inspired me greatly. In particular, I like his approach of assuming a strong model and then fixing it when it does not fit the data. (This sounds obvious, but the standard Bayesian methodology of 20 years ago did not allow for this.) I don’t think Jaynes ever stated this principle explicitly but he followed it in his examples. I remember one example of the probability of getting 1,2,3,4,5,6 on a roll of a die, where he discussed how various imperfections of the die would move you away from a uniform distribution. It was an interesting example because he didn’t just try to fit the data; rather, he used model misfit as information to learn more about the physical system under study.
That said, I think there’s an unfortunate tendency among some physicists and others to think of Jaynes as a guru and to think his pronouncements are always correct. (See the offhand mentions here, for example.) I’d draw an analogy to another Ed: I’m thinking here of Tufte, who made huge contributions in statistical graphics and also has a charismatic, oracular style of writing. Anyway, back to Jaynes: I firmly believe that much of one’s statistical tastes are formed by exposures to particular applications, and I could imagine that Jaynes’s methods worked particularly well for his problems but wouldn’t directly apply, for example, to data analyses in economics and political science. The general principles still hold—certainly, our modeling advice starting on page 3 of Bayesian Data Analysis is inspired by Jaynes as well as other predecessors—but I wouldn’t treat his specific words (or anyone else’s, including ours) as gospel."
― Guayaquil (eephus!), Tuesday, 2 August 2016 16:45 (nine years ago)
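The die example Gelman mentions is essentially Jaynes's maximum-entropy dice problem: given only that an imperfect die's average roll differs from the fair 3.5, find the least-committal distribution consistent with that constraint. Jaynes solved it analytically with Lagrange multipliers; here is a rough numerical sketch in Python, with the 4.5 target mean made up for illustration:

import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)
target_mean = 4.5  # hypothetical constraint from a biased die

# Maximize entropy = minimize sum(p * log p), subject to p being a
# distribution that reproduces the observed mean.
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

constraints = (
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p @ faces - target_mean},
)
result = minimize(neg_entropy, x0=np.full(6, 1 / 6), bounds=[(0, 1)] * 6,
                  constraints=constraints)
print(result.x.round(4))  # probabilities tilt smoothly toward the high faces

The analytic answer has the form p(k) proportional to exp(lambda * k): the die's imperfection shows up as an exponential tilt away from uniform.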
a onetime student of mine
i feel like i must have posted this on ANOTHER one of the threads we have for bonkers/fearsome bay-area tech-spirit jibber jabber, but said student has also since attended meetings for some kind of self-optimization circle, a 'total honesty' kind of thing that would have probably included dropping acid and wearing a robe in its 60s version
probably after i mentioned that previously, crim5on h3xag0n said 'no those groups can be quite useful'
― j., Tuesday, 2 August 2016 16:54 (nine years ago)
I've read Gelman's blog religiously for years but for some reason have never read BDA, though I did read parts of his mixed modeling book. Since I'm new to all this, I've been reading Kruschke 2nd ed. (an intro level book) and just picked up Statistical Rethinking which has been getting rave reviews.
― Dan I., Tuesday, 2 August 2016 16:56 (nine years ago)
reading bayesian statistics religiously is exactly how we got into this mess bro
― Guayaquil (eephus!), Tuesday, 2 August 2016 21:53 (nine years ago)
so on-brand that when effective altruists discuss possible reasons for time discounting, they ignore the possibility that our models and predictions might be wrong
https://concepts.effectivealtruism.org/concepts/discounting-the-future/
― lukas, Thursday, 22 July 2021 23:34 (four years ago)
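For reference, the linked page is about the standard exponential-discounting formula, where a benefit t years away gets weight 1/(1+r)^t. A toy illustration in Python, with the rates picked arbitrarily:

# Present value of one util delivered t years from now, at discount rate r.
def present_value(t, r):
    return 1.0 / (1.0 + r) ** t

for r in (0.001, 0.01, 0.05):
    print(r, [f"{present_value(t, r):.2e}" for t in (10, 100, 1000)])
# even at a 1% rate, a util 1000 years out is worth ~5e-05 today

Any positive rate zeroes out "billions of years" arguments, which is why longtermists insist the rate must be exactly zero. And the objection above fits the same algebra: a constant chance per year that your model of the future is simply wrong behaves exactly like a positive discount rate.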
This person has spent the last, like, 20 years planning ways to outsmart skynet: pic.twitter.com/Z1wf1ACD3y— john stuart millennial 🥑 (@js_thrill) August 8, 2021
― Believe me, grow a lemon tree. (ledge), Monday, 9 August 2021 07:38 (four years ago)
looool
― Clara Lemlich stan account (silby), Monday, 9 August 2021 17:03 (four years ago)
So you know the thing these guys do, when they reductio ad absurdum themselves without realizing it? ("We probably live in a simulation", "the only rational thing to do is maximize the number of insects", whatever. Boltzmann brains had a separate genesis but I'll allow that concept here.)

I wonder if it's possible to prove, maybe inductively, the existence of an infinite number of these superficially rational conclusions.
― death generator (lukas), Monday, 5 September 2022 20:54 (three years ago)
Maybe I'm doing them a disservice (lol) (I'm not about to start digging into the forums) but their big idea that if everyone were rational we could build a utopia doesn't seem to take into account the perfectly rational idea of perfectly rational sociopaths.
― ledge, Tuesday, 6 September 2022 07:46 (three years ago)
God, grant me the serenity to accept the people I cannot change;
The courage to change the person I can;
And the wisdom to know: It's me.
― Sonned by a comedy podcast after a dairy network beef (bernard snowy), Tuesday, 6 September 2022 11:32 (three years ago)
If I am not the problem, there is no solution.
― Sonned by a comedy podcast after a dairy network beef (bernard snowy), Tuesday, 6 September 2022 11:33 (three years ago)
the existence of an infinite number of these superficially rational conclusions.
i suggest training an AI model to generate them
― ufo, Tuesday, 6 September 2022 11:35 (three years ago)
believing that you are an entirely rational being is a greater leap of faith than anything found in any major world religion.
― link.exposing.politically (Camaraderie at Arms Length), Tuesday, 6 September 2022 12:01 (three years ago)
"the only rational thing to do is maximize the number of insects"
What is this in reference to?
― peace, man, Tuesday, 6 September 2022 12:32 (three years ago)
ah this is the thread for https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/
― TWELVE Michelob stars?!? (seandalai), Tuesday, 6 September 2022 14:05 (three years ago)
xp lol bernard
― Karl Malone, Tuesday, 6 September 2022 14:19 (three years ago)
See also seandalai's link but:
(1) effective altruists are almost always utilitarians
(2) they kinda ignore negative utility
(3) so for them, the best thing to do is maximize the number of sentient beings, because more utils
but yeah per seandalai's link, they consider simulated beings just as good as actual beings, so we should aim for a future with lots of computers simulating people etc etc
― death generator (lukas), Tuesday, 6 September 2022 15:22 (three years ago)
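The arithmetic behind "more utils" is Parfit's repugnant conclusion: if the value of a world is just the sum of everyone's utility, sheer headcount beats quality of life. Toy numbers in Python, invented for illustration:

# Total utilitarianism: the value of a world is the sum of all utility in it.
def total_utility(population, utility_per_person):
    return population * utility_per_person

flourishing = total_utility(10**10, 100.0)      # 10 billion people with great lives
barely_positive = total_utility(10**15, 0.01)   # 10^15 simulated lives barely worth living

print(f"{flourishing:.1e} vs {barely_positive:.1e}")  # 1.0e+12 vs 1.0e+13

The second world "wins" by a factor of ten, so maximizing total utils tells you to tile the future with as many barely-happy sentients as possible.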
xp I thought the subheading would be enough to give me a grasp of how stupid and sad this is, but no it got much dumber:
Longtermism is a quasi-religious worldview, influenced by transhumanism and utilitarian ethics, which asserts that there could be so many digital people living in vast computer simulations millions or billions of years in the future that one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:25 (three years ago)
“Rationalism”. Ever since I’ve realized this is an obsession of goons on the dark enlightenment spectrum I use it against them as much as possible.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:28 (three years ago)
Not to be captain save-a-rationalist but I'm not sure about the overlap between effective altruism and longtermism/transhumanism. The former might typically be utilitarian adjacent but I don't think it's necessarily tied in with the latter, and isn't exclusively the domain of rationalist weirdos.
― ledge, Tuesday, 6 September 2022 15:32 (three years ago)
AIUI longtermism is one branch of effective altruists. Yes, there are effective altruists who are more into sending deworming pills to schools in Africa and the like.
― death generator (lukas), Tuesday, 6 September 2022 15:40 (three years ago)
They both spring from the same error though, which is that we just need to get some smart people to figure out things for the rest of us.
― death generator (lukas), Tuesday, 6 September 2022 15:41 (three years ago)
Right but the best way out of this mess we’ve made would be a consensus based on inclusive dialogue that values actual real rationalism. I mean we do agree that the net effect of the actual Enlightenment was beneficial yeah? Or am I out of line here.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:50 (three years ago)
Maybe I don’t even believe that tbh.
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 15:51 (three years ago)
I mean we do agree that the net effect of the actual Enlightenment was beneficial yeah?
I think if we avoid destroying the earth, yeah I'd agree with this.
Right but the best way out of this mess we’ve made would be a consensus based on inclusive dialogue that values actual real rationalism.
I had something more like "minimize human domination over other humans" in mind but this works too.
― death generator (lukas), Tuesday, 6 September 2022 15:56 (three years ago)
So here's an effective altruist arguing that longtermism is bs, basically saying your little toy model of the future is useless: https://forum.effectivealtruism.org/posts/RRyHcupuDafFNXt6p/longtermism-and-computational-complexity
Someone makes a brilliant point in the comments: "Loved this post - reminds me a lot of intractability critiques of central economic planning, except now applied to consequentialism writ large."
Given that most EAs are kinda libertarian-leaning (hate central planning when applied to real-world economies) this is ... devastating.
― death generator (lukas), Tuesday, 6 September 2022 15:58 (three years ago)
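The intractability argument in that post, in miniature: a consequentialist scoring the far future has to evaluate a space of trajectories that grows exponentially with the time horizon. A sketch in Python, with the branching factor made up:

# If the world can branch into b relevantly different states each year,
# the number of century-long trajectories to evaluate is b**100.
branching, years = 10, 100
trajectories = branching ** years
print(f"10^{len(str(trajectories)) - 1} trajectories")
# 10^100, versus roughly 10^80 atoms in the observable universe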
xps yeah I didn't realise how much the official EA organisation had been taken over:
https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism
― ledge, Tuesday, 6 September 2022 16:10 (three years ago)
xp that is an exceedingly rigorous formulation of what is a very obvious and common sense objection. (hence far more effective for the intended audience.)
― ledge, Tuesday, 6 September 2022 16:23 (three years ago)
I had something more like "minimize human domination over other humans" in mind but this works too.

Right. Am I perhaps fundamentally misunderstanding rationalism? (Genuine question, I come to these kinds of threads to learn — I may not be totally out of line but I am mostly out of my depth.)

My suggestion was focused on the process while yours seems more goals-oriented. Which is the problem that others seem to point out with absolute rationalism, that it has no inherent ethical framework?
― recovering internet addict/shitposter (viborg), Tuesday, 6 September 2022 16:26 (three years ago)
Well mine is process-oriented too I think ... one of the reasons to oppose human domination over other humans is everyone has a limited view of the world, everyone sees based on their own experiences and interests, so process-wise you should avoid having people make decisions for other people, regardless of how well-meaning they might be.
I may not be totally out of line but I am mostly out my depth.
lol trust me I have a very shallow understanding of this stuff as well. My indignation, however, is bottomless.
Which is the problem that others seem to point out with absolute rationalism, that it has no inherent ethical framework?
Utilitarianism, right? (which is related to but I think not the same as consequentialism, but I don't understand the difference)
― death generator (lukas), Tuesday, 6 September 2022 16:35 (three years ago)
Consequentialism just says that the morality of an action resides in its consequences, as opposed to how well it follows some (e.g. god given) rules or whether it's inherently virtuous (whatever that means).

Utilitarianism specifies what the consequences should be.
― ledge, Tuesday, 6 September 2022 16:48 (three years ago)
Which is partly why utilitarianism is so tempting - consequentialism itself seems almost transparently true, and then well what could be wrong with maximising happiness?
― ledge, Tuesday, 6 September 2022 17:21 (three years ago)
Consequentialism just says that the morality of an action resides in its consequences
Which is just a fancier way of saying "the end justifies the means". But your chosen formulation of it immediately suggested the thought that consequences are open-ended, extending into all futurity, and therefore are impossible to measure.
― more difficult than I look (Aimless), Tuesday, 6 September 2022 17:30 (three years ago)
consequentialism itself seems almost transparently true, and then well what could be wrong with maximising happiness?
my uneducated answer here is that if you've arrived at a situation where other people are pawns in your game - even if you mean them well - something has gone wrong upstream.
obviously there are situations where you need to guess what is best for someone else, but we should try to minimize them. it shouldn't be the paradigm example of moral reasoning.
― death generator (lukas), Tuesday, 6 September 2022 18:24 (three years ago)
btw, effective altruism has its own ilx thread.
art is a waste of time; reducing suffering is all that matters
― more difficult than I look (Aimless), Tuesday, 6 September 2022 18:38 (three years ago)
xp yes, which is why the answer to the Enlightenment: good/bad? question differs depending on where in the world you ask it
― rob, Tuesday, 6 September 2022 18:39 (three years ago)
well what could be wrong with maximising happiness?

This was rhetorical but yes treating people as pawns is one major problem, as is the fact that happiness, or whatever your unit of utility is, is not the kind of thing that you can do calculations with. One hundred and one people who are all one percent happy is not at all a better state of affairs than one person who is one hundred percent happy. (Not that there isn't a place for e.g. quality adjusted life years calculations in certain institutional settings.)
― ledge, Tuesday, 6 September 2022 18:59 (three years ago)
Which is just a fancier way of saying "the end justifies the means". But your chosen formulation of it immediately suggested the thought that consequences are open-ended, extending into all futurity, and therefore are impossible to measure

I think "the end justifies the means" is a bit more slippery - it's often used to weigh one set of consequences more heavily than another, e.g. bombing hiroshima to end the war. And, well, we're talking about human actions and human consequences, so I think it's fair to restrict it to humanly measurable ones.
― ledge, Tuesday, 6 September 2022 19:12 (three years ago)
Even human consequences extend indefinitely. Identifying an end point is an arbitrary imposition upon a ceaseless flow, the rough equivalent of ending a story with "and they all lived happily ever after".
― more difficult than I look (Aimless), Tuesday, 6 September 2022 20:11 (three years ago)
so do you never consider the consequences of your actions or do you have trouble getting up in the morning?
― ledge, Tuesday, 6 September 2022 20:43 (three years ago)
I am not engaged in a program of identifying a universal moral framework based upon the consequences of my actions when I get up in the morning, which certainly makes it easier to choose what to wear.
― more difficult than I look (Aimless), Tuesday, 6 September 2022 20:47 (three years ago)
touche!
― ledge, Tuesday, 6 September 2022 21:08 (three years ago)
This is the ideal utilitarian form. You may not like it, but this is what peak performance looks like pic.twitter.com/uHvCp2Cq7y— MHR (@SpacedOutMatt) September 16, 2022
― 𝔠𝔞𝔢𝔨 (caek), Saturday, 17 September 2022 16:30 (three years ago)
incredible
― death generator (lukas), Sunday, 25 September 2022 23:20 (three years ago)
Read this a few days ago. As AI burns through staggering amounts of money with no reasonable use case so far, all your fave fascist tech moguls are gonna hitch themselves to a government gravy train under a Trump administration (gift link): https://wapo.st/3wllikQ
― Are you addicted to struggling with your horse? (Boring, Maryland), Sunday, 5 May 2024 14:35 (one year ago)