Self-Fulfilling Model Organisms
Collection • 7 items
| id (string, 36 chars) | source (string, 15 classes) | formatted_source (string, 13 classes) | text (string, 2 to 7.55M chars) |
|---|---|---|---|
| ac988c04-9c5f-4ee7-bb46-c1cda40735e2 | trentmkelly/LessWrong-43k | LessWrong | [link] Back to the trees<br>So we say we know evolution is an alien god, which can do absolutely horrifying things to creatures. And surely we are aware that includes us, but how exactly does one internalize something like that? Something so at odds with default cultural intuitions. It may be just my mood tonight, but th... |
| 8735dc6a-cad2-4228-ab78-7ffb4fd5b6f6 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | What's the Use of Utility Functions?<br>okay so in some of the earlier Computerphile videos I talked about utility functions or objective functions and we got a lot of different comments relating to that idea one thing people said was well surely this kind of monomaniacal following of a single utility function at the cos... |
| 9c6c24e3-16c4-4bc8-881b-f5c1dbe589b7 | trentmkelly/LessWrong-43k | LessWrong | LW Study Hall - 2 Month Update<br>Comment reposted from (link) for exposure<br>Two months have passed and I’m glad to say the LW Study Hall on tinychat is still active and alive. Since judging from the comments it kind of looks like we’ve moved on from tinychat, a review like this might be useful for anyone who hasn’t b... |
| 9fc0b778-015a-45a2-a028-c90fddd64351 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Bayesian Probability is for things that are Space-like Separated from You<br>First, I should explain what I mean by space-like separated from you. Imagine a world that looks like a [Bayesian network](https://en.wikipedia.org/wiki/Bayesian_network), and imagine that you are a node in that Bayesian network. If there is a p... |
| 657d43e4-2e11-416b-b4f1-3ef414d651c1 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Win $50k for Solving a Single AI Problem? #Shorts<br>say you've got a huge diamond you want to protect so you put it in a cool sci-fi vault with all sorts of sensors and actuators you have an ai system to run the vault and the plans it comes up with might be too complex for you to understand but it also predicts the fina... |
| 137941c4-d18f-474a-a801-cb6eb6b5d446 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Melbourne practical rationality meetup<br>Discussion article for the meetup : Melbourne practical rationality meetup<br>WHEN: 06 January 2012 07:00:00AM (+1100)<br>WHERE: 55 Walsh St, West Melbourne VIC 3003, Australia<br>Practical rationality, as distinct from the social and rationality outreach meetups. Look for a s... |
| d2e15d3f-627f-4484-a966-d0bc29f4adea | trentmkelly/LessWrong-43k | LessWrong | Philosophy of Numbers (part 1)<br>This post is the first in a series of things that I think would be fun to discuss on LW. Part two is here.<br>It seems like there are (at least) two kinds of things we make statements about: physical things, like apples or cities, and logical thing... |
| d708a14c-0d46-43ac-b29f-8a7f4a07c010 | trentmkelly/LessWrong-43k | LessWrong | Reality is whatever you can get away with.<br>I register my luggage, and stick a paper label to it. There are many kiosks for placing luggage in the cargo system. One has a long line. One has a single family. The rest are empty. The workers at those sections are on their phones.<br>I walk up to one with my bag, and lightly... |
| 000a2291-3ce1-4deb-9544-d3b3e94e61bd | trentmkelly/LessWrong-43k | LessWrong | [Link] Son of low-hanging fruit<br>Related: Thick and Thin, Loss of local knowledge affecting intellectual trends<br>An entry I found in the archives on Gregory Cochran's and Henry Harpending's blog West Hunter.<br>> In yet another example of long-delayed discovery, forms of high-altitude lightning were observed for at leas... |
| 30cc4427-70e4-4acf-83ab-83d3ce5f9418 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Anthropics: different probabilities, different questions<br>I've written before that different theories of anthropic probability [are really answers to different questions](https://www.lesswrong.com/posts/nxRjC93AmsFkfDYQj/anthropic-probabilities-answering-different-questions). In this post I'll try to be as clear as pos... |
| 80426d43-5149-4fd3-b35e-6e3c67b51641 | trentmkelly/LessWrong-43k | LessWrong | Memetic Judo #3: The Intelligence of Stochastic Parrots v.2<br>There is the persistent meme that AIs such as large language models (ChatGPT etc.) do, in a fundamental sense, lack the ability to develop human-like intelligence.<br>Central to it is the idea that LLMs are merely probability-predictors for the next word based o... |
| 3c51eb1e-b23f-4fce-885a-0f6789c2b89e | trentmkelly/LessWrong-43k | LessWrong | Sam Altman on GPT-4, ChatGPT, and the Future of AI \| Lex Fridman Podcast #367<br>Lex Fridman just released a podcast episode with Sam Altman, CEO of OpenAI. In my opinion, there wasn't too much new here that hasn't been said in other recent interviews. However, here are some scattered notes on parts I found interesting f... |
| 35b243a2-4cb7-4232-9194-424755831e81 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | EA is underestimating intelligence agencies and this is dangerous<br>**Summary:** When it comes to observing intelligence agencies, it's hard to see the hardened parts and easy to observe the soft corrupt parts. This leads to a bias where very large numbers of people overestimate how prevalent the easily-observed soft an... |
| 4c65fe27-14ab-4e39-bdc0-4a46f711439d | trentmkelly/LessWrong-43k | LessWrong | Plane crashes<br>So. Inevitably after a plane crash a discussion comes up where someone may say that they're worried about flying now, and someone else pulls out the statistic that driving to the airport is more dangerous than flying. I think this reasoning is basically correct in the long term, but not appropriate in t... |
| c5252be9-9965-46ff-8269-bd4a78fa391d | StampyAI/alignment-research-dataset/lesswrong | LessWrong | A "weak" AGI may attempt an unlikely-to-succeed takeover<br>It seems possible that the first situationally-aware "goal having" AGI we land on will not be sufficiently capable along the axes that would let it quickly and reliably achieve a [decisive strategic advantage](https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/su... |
| 17013d1c-2245-44de-992c-840ef2586f28 | trentmkelly/LessWrong-43k | LessWrong | Recovering the past<br>One of the themes of current scientific progress is getting more and more information out of tiny amounts of data. Who'd have thought that we could learn so much of distant and recent biological history from DNA, and so much about distant planets, stars, galaxies, and the cosmos from tiny differenc... |
| 5edc4e93-fae1-4882-a30f-8d1d98d83d53 | trentmkelly/LessWrong-43k | LessWrong | Authoritarian Empiricism<br>(Excerpts from a conversation with my friend Mack, very slightly edited for clarity and flow, including getting rid of most of the metaconversation.)<br>Ben: Just spent 2 full days offline for the holiday - feeling good about it, I needed it.<br>Mack: Good!<br>Ben: Also figured out some stuff about ... |
| 3bf9d551-bd50-4332-8df8-0ea2c7a6209d | StampyAI/alignment-research-dataset/lesswrong | LessWrong | From the "weird math questions" department...<br>Here's something I've been wondering about, in the context of Solomonoff induction and uncomputable sequences.<br>I have a device that is either a halting oracle, or an ordinary Turing machine which gives the correct answer to the halting problem for all programs smaller th... |
| 5b3c3bc4-5ef4-40f9-ba99-95b8ede534ec | trentmkelly/LessWrong-43k | LessWrong | Is anyone developing optimisation-robust interpretability methods?<br>By optimisation-robust I mean that it withstands point 27 from AGI Ruin:<br>> When you explicitly optimize against a detector of unaligned thoughts, you're partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts ... |
| 823403ee-93cf-43df-98dc-ddeb6f158885 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Abram Demski's ELK thoughts and proposal - distillation<br>This post was written for the SERI MATS program. I thank Evan Hubinger and Leo Gao for their mentorship in the program. Further thanks go to Evan Hubinger (again), Simon Marshall, and Johannes Treutlein for specific comments regarding the content of this post.<br>T... |
| eb2c3a2c-eaaf-4f3a-b7cf-c44fae4fb383 | trentmkelly/LessWrong-43k | LessWrong | The Alignment Newsletter #3: 04/23/18<br>Highlights<br>Incomplete Contracting and AI Alignment (Dylan Hadfield-Menell et al): This paper explores an analogy between AI alignment and incomplete contracting. In human society, we often encounter principal-agent problems, where we want to align the incentives of the agent with ... |
| 59db7e19-9797-4c51-8a00-d6b2e17266d8 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post2561<br>Since the term corrigibility was introduced in 2015, there has been a lot of discussion about corrigibility, on this forum and elsewhere. In this post, I have tried to disentangle the many forms of corrigibility which have been identified and discussed so far. My aim is to offer a general map for anybody... |
| d04b1dc1-b4a2-4594-8e07-9776b843c11a | trentmkelly/LessWrong-43k | LessWrong | Open Thread, April 16 - 30, 2012<br>If it's worth saying, but not worth its own post (even in Discussion), then it goes here. |
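
To work with these rows programmatically, here is a minimal sketch using the Hugging Face `datasets` library. It loads one of the source repos named in the table above (`trentmkelly/LessWrong-43k`); the assumption that this repo loads with its default config and carries the same column names as the merged preview is mine, so the sketch inspects the schema before touching any column.

```python
# Minimal sketch: load one source dataset from the table above and
# inspect it. Assumes the `datasets` library is installed and that
# trentmkelly/LessWrong-43k loads with its default config; the exact
# columns may differ from the merged (id, source, formatted_source,
# text) preview shown on this page, so we check before indexing.
from datasets import load_dataset

ds = load_dataset("trentmkelly/LessWrong-43k", split="train")

# See which columns this particular repo actually ships.
print(ds.column_names)

# Fall back to the first column if "text" is named differently here.
text_col = "text" if "text" in ds.column_names else ds.column_names[0]
print(ds[0][text_col][:300])  # first 300 characters of one record
```

The same pattern applies to the other listed repos (e.g. the `StampyAI/alignment-research-dataset` sources), though those may require a config name as a second argument to `load_dataset`.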