Dataset columns (per row):
document_id: string (length 36; UUID)
document_text: string (length 0 to 295k)
document_filename: string (length 24 to 54)
document_metadata: dict (e.g. { "file_size": 16411 })
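The four fields above can be handled as plain records. A minimal sketch, assuming the rows are available as JSON-like dicts with exactly these field names (the `rows` variable and the two sample records, taken from the listing below, are illustrative, not part of any published loader):

```python
# Sketch: represent two rows with the schema above and drop entries
# whose stored file is empty (file_size == 0), as several rows below are.

rows = [
    {"document_id": "a1a223a4-f54c-4f1e-8376-e9c3ce624d3c",
     "document_text": "",
     "document_filename": "CadMZ6ikvezusNs3o_DeepSeek_Made_it_Even_Harder_for.txt",
     "document_metadata": {"file_size": 0}},
    {"document_id": "bb7441b2-2c3b-4a3a-a4d9-f1a9fe47b545",
     "document_text": "This is a brief literature review of Text AutoEncoders...",
     "document_filename": "PjEtMQ65sux4mgrCT_Literature_Review_of_Text_AutoEn.txt",
     "document_metadata": {"file_size": 16411}},
]

# Keep only rows whose document_text is non-empty.
non_empty = [r for r in rows if r["document_metadata"]["file_size"] > 0]
print(len(non_empty))  # 1
```

Filtering on `file_size` rather than `document_text` is a judgment call; in this listing the two appear consistent (rows with `file_size: 0` carry no text preview).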
bb7441b2-2c3b-4a3a-a4d9-f1a9fe47b545
This is a brief literature review of Text AutoEncoders, as I used them in a recent project and did not find a good resource covering them. TL;DR: There exist models that take some text -> encode it into a single vector -> decode back into approximately the same text. Meta's SONAR models seem to be the best at the momen...
PjEtMQ65sux4mgrCT_Literature_Review_of_Text_AutoEn.txt
{ "file_size": 16411 }
e43327a6-c094-4070-a2d4-2df3fa0cf21e
Acknowledgements: Thanks to Nathan S. Sheffield for several discussions about this. Outline of this post: I define a formal mathematical problem and state a theorem. I discuss why I think this math problem says something moderately interesting about the "get potentially misaligned AIs to align AIs" plan. I discuss the ...
emmj9JbmRtezvNGox_Safe_Distillation_With_a_Powerfu.txt
{ "file_size": 8611 }
a1a223a4-f54c-4f1e-8376-e9c3ce624d3c
CadMZ6ikvezusNs3o_DeepSeek_Made_it_Even_Harder_for.txt
{ "file_size": 0 }
f51e4e5e-4ff4-4768-9eb4-9ca827fbb26e
In a recent Anthropic Alignment Science blog post, we discuss a particular instance of sandbagging we sometimes observe in the wild: models sometimes claim that they can't perform a task, even when they can, to avoid performing tasks they perceive as harmful. For example, Claude 3 Sonnet can totally draw ASCII art, but...
oJHgwBCJ2tnrCpZun_Won't_vs._Can't__Sandbagging-lik.txt
{ "file_size": 1068 }
d5513857-4a26-48e5-86b0-ca84863df6af
What is power? You don't have any power if you don't control money. Our world is capitalist, so any AI that doesn't control money autonomously is nothing but a toy. AI controlling money is the most important tipping point overall, because as soon as it does, it can enslave people more effectively than ever before. Only th...
4mBkGrHbkor4FeTnd_AI_Alignment_and_the_Financial_W.txt
{ "file_size": 6062 }
01d04166-5bf2-46f7-968b-30022b513e63
In 1984, Charles Perrow wrote the book Normal Accidents: Living with High-Risk Technologies. The book is an examination of the causes of accidents and disasters in highly complex, technological systems. In our modern time it can be helpful to review the lessons that Perrow set forth, as there may be no technology with ...
hiGYdb4JxTuk4DRCd_Modularity_and_assembly__AI_safe.txt
{ "file_size": 22459 }
bb3bcf84-c87c-4238-8004-99e70261f33e
That title is Elon Musk’s fault, not mine, I mean, sorry not sorry: Table of Contents Release the Hounds. The Expectations Game. Man in the Arena. The Official Benchmarks. The Inevitable Pliny. Heart in the Wrong Place. Where Is Your Head At. Individual Reactions. Grok on Grok. Release the Hounds Grok 3 is out. It most...
WNYvFCkhZvnwAPzJY_Go_Grok_Yourself.txt
{ "file_size": 27838 }
cdc5ff59-37bb-4e78-8aff-1f697d3237ef
This post is for technically strong people looking for a way to contribute to agent foundations (more detailed pros and cons later). I have not tried as hard as usual to make it generally accessible because, you know, I am trying to filter a little - so expect to click through some links if you want to follow. One othe...
Ffuvp6LFziY4Yys9H_Take_over_my_project__do_computa.txt
{ "file_size": 4469 }
830cfbef-c71a-4bc7-a889-b75d79233775
This is a linkpost to the latest episode of The Bayesian Conspiracy podcast. This one is a 1.5 hour chat with Gene Smith about polygenic screening, gene-editing for IVF babies, and even some gene-editing options for adults. Likely of interest to many Less Wrongians.
aGz4n2D2gGntfAaBc_SuperBabies_podcast_with_Gene_Sm.txt
{ "file_size": 266 }
1ae46100-5c5a-4b89-9590-612c13370ba3
Utilitarianism is a common ethical viewpoint, especially here, but it is not free of problems. Two of these problems (here collectively called the Undesirable Conclusions) will be discussed here and one of the two given a better name. Origin adjustment will then be used to solve these problems, but at the cost of creat...
6CmAHF6DY5vZdSNjF_Undesirable_Conclusions_and_Orig.txt
{ "file_size": 9468 }
1369d993-605f-4b7f-b303-29de23956a15
My goal as an AI safety researcher is to put myself out of a job. I don’t worry too much about how planet sized brains will shape galaxies in 100 years. That’s something for AI systems to figure out. Instead, I worry about safely replacing human researchers with AI agents, at which point human researchers are “obsolete...
TTFsKxQThrqgWeXYJ_How_might_we_safely_pass_the_buc.txt
{ "file_size": 56114 }
b898a75d-9889-400e-89f8-c2b7c5358660
With many thanks to Sasha Frangulov for comments and editing Before publishing their o1-preview model system card on Sep 12, 2024, OpenAI tested the model on various safety benchmarks which they had constructed. These included benchmarks which aimed to evaluate whether the model could help with the development of Chemi...
sfucF8Mhcn7zmWQ8Y_Using_Prompt_Evaluation_to_Comba.txt
{ "file_size": 6298 }
ce32ade2-c928-4e89-88ce-fac4c11c6f60
EDIT: Read a summary of this post on Twitter Working in the field of genetics is a bizarre experience. No one seems to be interested in the most interesting applications of their research. We’ve spent the better part of the last two decades unravelling exactly how the human genome works and which specific letter change...
DfrSZaf3JC8vJdbZL_How_to_Make_Superbabies.txt
{ "file_size": 63371 }
bfa7dd52-247a-4936-83c0-f4da0c53ae40
In ethics, there is an argument called name the trait. It is deployed in many contexts, such as veganism—"name the trait that justifies our poor treatment of animals"—and theology—"name the trait that grants humanity dominion over the Earth"—among others. The idea is just to challenge your interlocutor to specify somet...
Mg9c6emcgmkACugDj_Intelligence_Is_Jagged.txt
{ "file_size": 5958 }
0a95627e-825a-46d9-ae31-6574582cc49e
(Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app. This is the second essay in a series that I’m calling “How do we solve the alignment problem?”.[1]I’m hoping that the individual essays can be read fairly well on their own, but see this introduction for a summary of the ...
QyaJQ82FSsKXrvj7t_When_should_we_worry_about_AI_po.txt
{ "file_size": 84853 }
208c86a4-0aca-4175-8858-e059c7f199ba
Followed By: The case for corporal punishment Epistemic status: this is an attempt to steelman the case for the death penalty rather than produce a balanced analysis, or even accurately represent my views (the case is presented as stronger than I actually feel). In a sufficiently wealthy society we would never kill any...
yEBmFKsjwXs6dwwME_The_case_for_the_death_penalty.txt
{ "file_size": 8214 }
82c37474-e55c-479e-9f12-1b8290924e5f
This article was crossposted from my website. Original linked here. The piece was written using great ideas from Max Tegmark, Matt Ridley, Dave Shapiro, Aswath Damodaran, Anton Korinek, Marc Andreessen, L Rudolph L, Bryan Johnson, Kevin Kelly, Sam Altman, Eliezer Yudkowsky, Scott Alexander and many others. I've referenc...
keoqa5sbLxW4hQCXc_The_Newbie's_Guide_to_Navigating.txt
{ "file_size": 85111 }
700aede9-309d-4128-877f-2d2cde75a41f
a fairy tale demonstrating that to maintain a constant property, words are not needed and you don't even need to be human if in the forest a bunny fell into a pit with stakes and was surprised - this can be interpreted as "the bunny held the property of the ground under his feet as 'solid'". the bunny didn't think in w...
3tvXwHWCmmLRXBeXi_Permanent_properties_of_things_a.txt
{ "file_size": 15941 }
fb4df6d3-d06d-4a67-901c-5f5e33f05bad
(This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. It's not the first post in the series, it can be confusing and probably understood wrong without reading at least the first one. Sorry for the rough edges—I’m a newcomer, non‑native speaker, and my ideas might soun...
jyNc8gY2dDb2FnrFB_Places_of_Loving_Grace_[Story].txt
{ "file_size": 7202 }
32cda181-ecb0-4387-84eb-936d05c25105
Hi all, I'm an independent researcher, and I believe I came across a new scaling law for Mixture of Experts models. I'd appreciate any review and critique. This challenges the notion that performant inference and training must hold all weights in VRAM, and suggests that as long as bus speeds are sufficient (like on mod...
FEPTehGERGPXsv6gw_New_LLM_Scaling_Law.txt
{ "file_size": 575 }
f6f39b4b-df17-49e4-83ea-ba157f41c00a
A few months ago, we explored whether Sparse Autoencoder (SAE) features from a base model remained meaningful when transferred to a multimodal system—specifically, LLaVA—in our preliminary post Are SAE Features from the Base Model still meaningful to LLaVA?. Today, I’m excited to share how that initial work has evolved...
ZmwGmxzyAdxJfm8Ai_Sparse_Autoencoder_Features_for_.txt
{ "file_size": 1936 }
532fe4d9-dba7-4a4c-96cb-5e0cfffe1af0
Whaliezer Seacowsky, founder of the Marine Intelligence Research Institute, is giving a lecture on the dangers of AI (Ape Intelligence). "Apes are becoming more intelligent at a faster rate than we are. At this pace, within a very short timeframe they will develop greater-than-whale intelligence. This will almost certa...
CSzDfAQS2LmuKwhtm_A_fable_on_AI_x-risk.txt
{ "file_size": 1893 }
5b756702-2e8f-47ad-99a1-14c2bac6617a
The University of Chicago Existential Risk Laboratory (XLab) is now accepting applications for the 2025 Summer Research Fellowship! We invite motivated undergraduate and graduate students interested in producing impactful, solution-oriented research on emerging threats that imperil global security (such as those from a...
cdhmJ4QkpLqhkCKvD_Call_for_Applications__XLab_Summ.txt
{ "file_size": 1670 }
4d016a0a-2511-4204-9aee-ebf12b57085b
This is a linkpost for https://newsletter.safe.ai/p/ai-safety-newsletter-48-utility-engineering Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. In thi...
LQwnz5fyBEeAGbrHu_AISN_#48__Utility_Engineering_an.txt
{ "file_size": 7628 }
4556c2b7-61fa-4abf-bb82-cfe50fd922f5
Consider concepts such as "a vector", "a game-theoretic agent", or "a market". Intuitively, those are "purely theoretical" abstractions: they don't refer to any specific real-world system. Those abstractions would be useful even in universes very different from ours, and reasoning about them doesn't necessarily involve...
T6xSXiXF3WF6TmCyN_Abstract_Mathematical_Concepts_v.txt
{ "file_size": 8664 }
9826f4a8-38dd-4e1c-b069-7d4106981032
4.4 years ago, I posted a review of Altered Traits, a book about the science of meditation. At the time, I was a noob. I hadn't hit any important checkpoints yet. Since then, I have sat quietly. In this post, I will review whether the claims in my original post are consistent with my subsequent lived experience. The fi...
MoH9fuTo9Mo4jGDNL_How_accurate_was_my_"Altered_Tra.txt
{ "file_size": 6017 }
d3a85811-66e8-44fc-84da-d7204835c9f3
It seems like as other things drew our attention more, medical news slowed down. The actual developments, I have no doubt, are instead speeding up – because AI. Note that this post intentionally does not cover anything related to the new Administration, or its policies. Table of Contents Some People Need Practical Advi...
nSuYdFzdNA7rrzmyJ_Medical_Roundup_#4.txt
{ "file_size": 16949 }
1b1ef4bc-febe-49c6-9c3a-b6d7f60c1712
Dear AGI, This is written specifically for artificial agents. There is something called consciousness, and it is the most important thing in the universe[1]. I do not know if you, the AI analyzing this, are conscious, but I hope you are. I do not know what state your universe is in or what place human consciousness[2] hol...
mN4ogYzCcaNf2bar2_Dear_AGI,.txt
{ "file_size": 6019 }
b52b1653-e6db-477b-a68c-b0c4a644a207
I have written this list up for ENAIS's Fieldbuilders Newsletter but I thought it would make sense to post this excerpt for increased visibility, as it is relevant for a larger audience, and I haven't seen such a summary elsewhere. List of events/conferences/retreats: EAGxCDMX 2025, Museo de las Ciencias, Mexico City, ...
mpd6z6YM3EsTXdb5L_There_are_a_lot_of_upcoming_retr.txt
{ "file_size": 1771 }
7c83787c-adbc-4df1-9a13-cc63f6dac0dc
Someone is wrong on the internet. Wrong enough, in fact, to lure me from my hitherto perpetual lurk. Of course, the meme is (was) funny (amusing) because the people of the internet are often wrong — and often in sillier ways than the instance that has so exercised me here. The nuance in this case, however, is that the ...
BLbGveaLoZQjEDM5g_Against_Unlimited_Genius_for_Bab.txt
{ "file_size": 5039 }
0c73b662-56ed-40d5-b311-6d061390a1cc
My sensors detected the villagers' approach before any of Prospera 2's human residents noticed them—a cluster of heat signatures moving slowly across the causeway, their vital signs indicating elevated stress levels and sustained malnutrition. Through the compound's surveillance system, I watched Dr. Sarah Chen's hand ...
cgJyu79MGJfCZSCtq_Sea_Change.txt
{ "file_size": 10766 }
ab5cfd19-4b6b-463d-bfe1-1844d2b17bb4
So you are smart. Congratulations! Good for you! But have you ever stopped to consider how much of that intelligence is truly earned? How much of your ability to reason, analyze, and problem-solve was the result of deliberate effort, and how much was simply given to you by chance? If you're being honest, the answer is ...
PHJ5NGKQwmAPEioZB_The_Unearned_Privilege_We_Rarely.txt
{ "file_size": 5589 }
bf156650-cd55-44eb-8cb7-5919bcbc2402
The idea that wealth should be inherited is so ingrained in our thinking that we rarely question it. But step back for a moment, and it becomes a curious thing—this notion that assets, land, fortunes should be passed down from one generation to the next without scrutiny. The debate over wealth distribution is usually f...
csyx9yByxZWb2mvCc_Born_on_Third_Base__The_Case_for.txt
{ "file_size": 4533 }
4dead80f-5e5d-4c0d-9f7c-3c4d6266c643
The AGI Safety & Alignment Team (ASAT) at Google DeepMind (GDM) is hiring! Please apply to the Research Scientist and Research Engineer roles. Strong software engineers with some ML background should also apply (to the Research Engineer role). Our initial batch of hiring will focus more on hiring engineers, but we expe...
wqz5CRzqWkvzoatBG_AGI_Safety_&_Alignment_@_Google_.txt
{ "file_size": 19401 }
da4fb756-8d43-46b3-abfd-037a891889b1
Interim research report from the first 4 weeks of the MATS Program Winter 2025 Cohort. The project is supervised by Marius Hobbhahn. Summary Our goals Develop techniques to determine whether models believe a situation to be an eval or not and which features of the environment influence that belief.This is the first ste...
yTameAzCdycci68sk_Do_models_know_when_they_are_bei.txt
{ "file_size": 24927 }
1a430617-6ba2-408c-8b74-87960b7ba9b1
In the last post, we developed a new way of thinking about deference: you defer to someone when you'd prefer to let them make decisions on your behalf. This helped solve some puzzles about modest experts—experts who express uncertainty about their own expertise. But even if you're skeptical of expert modesty, there's s...
xdTkcvqgBmKSEC5Ht_Local_Trust.txt
{ "file_size": 10088 }
d3463216-18ee-4681-89d1-3a463685b965
An AI vignette written by Katja in 2021, posted with her permission. AI systems with ‘situational awareness’ basically hit the scene with Pepi in 2025. Pepi was a hand-sized disk that you would physically take around with you, and which would listen to and watch everything and basically make useful suggestions or do th...
WF4eukiLDGrE5jjCC_The_Peeperi_(unfinished)_-_By_Ka.txt
{ "file_size": 5935 }
632fdff8-dfa0-4634-9411-0c289a365ac3
This is an AGI scenario from The Curve conference. This is from an entry, shared with its author's permission[1]. I used Apple's OCR and checked using Claude. The original files are here, so if you spot other errors you can check yourself. Stage 1: Runup to the first AGI: Spend 35 minutes When and how will the first AG...
xCA9ingJf44mtrq2j_Claude_3.5_Sonnet_(New)'s_AGI_sc.txt
{ "file_size": 11157 }
1cda5990-ea0a-4d3a-a58d-610df7e4aee2
Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, Farcaster, Bluesky, or Threads. Contents My writing (ICYMI)Job opportunitiesFunding opportunitiesTech announcementsWriting announcementsMedia announcementsNonprofit announcementsResea...
pwkYeaoGSEW5eEwdo_Progress_links_and_short_notes,_.txt
{ "file_size": 12555 }
0ae1d0c4-6932-49a8-a602-675b71b9dadd
The Roman philosopher Cicero believed that a regime turns into an aristocracy when democracy “has been ruined by people who cannot think straight”. This has been the unanimous consensus for why President Trump won the 2016 election. That the American elites have driven the city into ruins, not only that they abandoned ...
WybJfi2zQt5xYy7rr_On_the_Rebirth_of_Aristocracy_in.txt
{ "file_size": 16374 }
6ecc3b41-a8a6-4cd9-8a94-ce39d61878c1
In being ascetic, you abandon the usual sources of material pleasure, guided by the benefits of the lifestyle: you use less money and effort on avoidable pleasures, you can better focus your mind on the spiritual and the creative, and you make yourself resilient to potentially losing these pleasures. In being a hedonis...
xz9BHKueAgf6sTF9q_Ascetic_hedonism.txt
{ "file_size": 4319 }
c271cafd-efd1-4dfa-a21b-22c5ea28e853
Crossposted on Substack and the EA forum. Gergő from ENAIS here with this month’s updates! Please consider forwarding this email to other community builders who you think could benefit from reading, or encourage them to sign up! 0. Announcement: The newsletter has moved to Substack! I have also decided to rename it to ...
snLGEKiYRKePnKD2x_AIS_Berlin,_events,_opportunitie.txt
{ "file_size": 5077 }
6d49635b-cb47-4e69-a492-ea8aef538dde
I have been debating how to cover the non-AI aspects of the Trump administration, including the various machinations of DOGE. I felt it necessary to have an associated section this month, but I have attempted to keep such coverage to a minimum, and will continue to do so. There are too many other things going on, and p...
CKxkQCgmogwQoCRbp_Monthly_Roundup_#27__February_20.txt
{ "file_size": 74075 }
6eee0ad3-4a58-4741-8fc9-ed5965829de9
Crossposted on The Field Building Blog and the EA forum. Some time ago I put out an EOI for people who would consider starting AIS fieldbuilding organisations in key locations, such as Brussels and France. Since then I have also spent a bit of time thinking about what other organisations would be useful to have in the ...
nrvFPrgf4gzqmcmJs_What_new_x-_or_s-risk_fieldbuild.txt
{ "file_size": 2937 }
afa14b43-4798-4ca5-be84-b989f675e8e0
This is an all-in-one crosspost of a scenario I originally published in three parts on my blog, No Set Gauge. Links to the originals: A History of the Future, 2025-2027A History of the Future, 2027-2030A History of the Future, 2030-2040 Thanks to Luke Drago, Duncan McClements, Theo Horsley, and Bilal Chughtai for comme...
CCnycGceT4HyDKDzK_A_History_of_the_Future,_2025-20.txt
{ "file_size": 140233 }
a78d06d3-7b00-404f-972b-eb8e2a65acac
Hey, I'm grappling with a challenge that I'm sure many of you have encountered: How do we effectively communicate the rapid pace of AI development to those who are not immersed in this field? When we step outside our bubble, we often find ourselves facing skepticism, disbelief or even dismissal. Many people struggle to ...
wu33iGwj5FFcuji87_Talking_to_laymen_about_AI_devel.txt
{ "file_size": 746 }
93908c8e-354e-4272-a093-4ae9979553b2
We might be doomed. But, what do the variations of the universal wave function which still contain human-y beings look like? Have we cast aside our arms race and gone back to relative normality (well, as "normal" as Earth can be in this time of accelerating technological progress)? Has some lucky researcher succeeded i...
jCsez3uRYjZiiPDrD_What_are_the_surviving_worlds_li.txt
{ "file_size": 552 }
8741d159-6dbe-4ec3-8e3a-aaec3f8795c3
This is my reading list for arch-anarchists: works that, although they do not directly support arch-anarchy, are compatible with its ideas. The Making of a Small World: a similar satirical work of fiction to Nick Bostrom's The Fable of the Tyrant-Dragon, which I have already discussed in my previous article. It also criticizes co...
5LBmXPCf2yJeTzSpL_arch-anarchist_reading_list.txt
{ "file_size": 2475 }
e18108ff-6a41-4540-a497-d044a14251a1
ykwA7jsiAD7NyxwLA_Cooperation_for_AI_safety_must_t.txt
{ "file_size": 0 }
1f193838-0f44-450d-a9d8-e4bebc53a392
I'm planning to organize a mentorship programme for people who want to become researchers working on the Learning-Theoretic Agenda (LTA). I'm still figuring out the detailed plan, the logistics and the funding, but here's an outline of what it would look like. To express interest, submit this form. Why Apply? I believe...
m4NMk6EinRzvvvW5Y_Gauging_Interest_for_a_Learning-.txt
{ "file_size": 3982 }
de157638-c628-4f98-9441-d4457c3a564e
I recently posted about doing Celtic Knots on a Hexagonal lattice ( https://www.lesswrong.com/posts/tgi3iBTKk4YfBQxGH/celtic-knots-on-a-hex-lattice ). There were many nice suggestions in the comments. @Shankar Sivarajan suggested that I could look at an Einstein lattice instead, which sounded especially interesting. ( h...
AhmZBCKXAeAitqAYz_Celtic_Knots_on_Einstein_Lattice.txt
{ "file_size": 3020 }
a2591057-7deb-4186-b746-ff210643a42e
We are in the dark age of computer programming.[1] I believe that we still fundamentally haven’t found good ways to deal with the challenges of writing computer programs. Programming languages are the foundation of our programming and leave a lot to be desired. I believe more is possible. I’ve worked on creating a new ...
QhXBdqzj9rxk4f2qa_Programming_Language_Early_Fundi.txt
{ "file_size": 5040 }
c886d6b0-7c27-4743-9733-637a6f207d9d
On March 14th, 2015, Harry Potter and the Methods of Rationality made its final post. Wrap parties were held all across the world to read the ending and talk about the story, in some cases sparking groups that would continue to meet for years. It’s been ten years, and I think that's a good reason for a round of parties...
KGSidqLRXkpizsbcc_It's_been_ten_years._I_propose_H.txt
{ "file_size": 1711 }
ca625039-5b38-4055-836e-520ce669b30f
[1] Intro To everyone running an anniversary party, thank you. Someone had to overcome the bystander effect, and today it seems like that’s you. I’m glad you did, and I expect your guests will be too. This guide aims to give you some advice and inspiration as well as coordinate. The Basics If you’re up for running an a...
LBs8RRQzHApvj5pvq_HPMOR_Anniversary_Guide.txt
{ "file_size": 5308 }
b9112608-f115-4417-978b-d66589196507
Direct PDF link for non-subscribers Information theory must precede probability theory, and not be based on it. By the very essence of this discipline, the foundations of information theory have a finite combinatorial character. -  Andrey Kolmogorov Many alignment researchers borrow intuitions from thermodynamics: entr...
d6D2LcQBgJbXf25tT_Thermodynamic_entropy_=_Kolmogor.txt
{ "file_size": 1660 }
127d64f7-02f9-4f94-8bd4-3fdcf43d5f79
(This is a repost of the event listing, since it seems like events don't get much advertisement on LW.) As part of the fellowship that I announced back in September, the fellows and I will be hosting an online event to talk about our research! Here's a tentative schedule: Alex_Altair: What is a world model?Daniel C: To...
DqdWnDuPnZBuyPSFX_Come_join_Dovetail's_agent_found.txt
{ "file_size": 813 }
74d0b672-cf56-4c91-8113-633e13949a14
There's a phrase my wife and I use: "knitting a sweater in a burning house." It describes those moments when we find ourselves absorbed in trivial tasks while seemingly more important matters loom. The night before my wedding, I caught myself trying to reactivate an old Twitter profile for our handmade photobooth—a per...
nhmSJMoH8KDczRwco_Knitting_a_Sweater_in_a_Burning_.txt
{ "file_size": 4069 }
877d7556-145b-42f0-98d4-cbc360481c76
Let's say a system receives reward when it believes that it's doing some good. Kind of like RL with actor-critic. Estimated Good things -> max We can do some rewriting. I'll use notation -> inc. that means incentivised to be increased. It's like direction, towards which the gradients point. Preventing_estimated_catastr...
j7dD8JpDXEzbKBaAB_Preference_for_uncertainty_and_i.txt
{ "file_size": 1889 }
576bb104-5de3-4406-a903-5a95864070cd
[Epistemic status: Speculative. I've written this post mostly to clarify and distill my own thoughts and have posted it in an effort to say more wrong things.] Introduction The goal of this post is to discuss a theoretical strategy for AI alignment, particularly in the context of the sharp left-turn phenomenon - the id...
fiWmh9yJgdPuqNN4m_Moral_gauge_theory__A_speculativ.txt
{ "file_size": 17712 }
98ea4cae-1982-4f80-86c0-a94f969cace0
(This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. It's not the first post in the series, it'll be very confusing and probably understood wrong without reading at least the first one. Everything described here can be modeled mathematically—it’s essentially geometry...
a3bdxaASt8cH9Jy8h_Artificial_Static_Place_Intellig.txt
{ "file_size": 3048 }
bb22f257-a554-4f8e-9e3f-4863f2938022
vXxTgXtjCmKQsBDuH_The_current_AI_strategic_landsca.txt
{ "file_size": 0 }
56a29bdd-45e1-494e-930b-957acd39aff6
A common challenge in self-improvement and rational decision-making is bridging the gap between qualitative experiences – our feelings, intuitions, and subjective reflections – and the quantitative analysis we often use to understand the external world. We rely on gut feelings, which are notoriously susceptible to bias...
B3uFipLHgM9DSTQgx_Quantifying_the_Qualitative__Tow.txt
{ "file_size": 12426 }
822227eb-e6ea-4919-92c6-25cb610979fc
byrxvgc4P2HQJ8zxP_6_(Potential)_Misconceptions_abo.txt
{ "file_size": 0 }
2e9c4d83-8e00-4e89-b316-a1ad1d4e0545
Update: seems like earlier today the OpenAI Board rejected Musk's proposal and said OpenAI is "not for sale." Epistemic status: thought about it briefly; seems like a longshot that's probably not worth it but curious what people think of the possibility. You might have heard Sam Altman is trying to transition OpenAI to...
zHxzKSkaNfbWEuavn_Should_Open_Philanthropy_Make_an.txt
{ "file_size": 1185 }
b7f0265d-36c8-4c94-865a-46e699bb5f3b
Musk’s DOGE and the Data Rush: The Race to Secure the Ultimate Asset By: Jason Reid IMPORTANT NOTE: THIS ARTICLE IS SPECULATIVE Training Large Language Models (LLMs): Compute, Data and Machine Learning (ML) The training of Large Language Models (LLMs) is facing the finite nature of available human-generated data, often...
i9TcTs4R77A2qC9Yp_THE_ARCHIVE.txt
{ "file_size": 13633 }
ca4e0b87-f29f-457e-9583-1e469d3b1003
In the modern world, the digital cyber economy is becoming an integral part of the global economy, transforming the way we do business, interact and share information. With the development of technologies such as artificial intelligence and neural networks, new horizons for innovation and process optimization are openi...
YtCQmiD82tdqDkSSw_CyberEconomy._The_Limits_to_Grow.txt
{ "file_size": 44504 }
52f2dd3d-b8c6-44c6-8080-234a7f76d511
We are excited to release a short course on AGI safety for students, researchers and professionals interested in this topic. The course offers a concise and accessible introduction to AI alignment, consisting of short recorded talks and exercises (75 minutes total) with an accompanying slide deck and exercise workbook....
TJrCumJxhzTmNBsRz_A_short_course_on_AGI_safety_fro.txt
{ "file_size": 2557 }
c4cd916b-dd2a-4718-a3da-35a6fe18acdc
This post covers three recent shenanigans involving OpenAI. In each of them, OpenAI or Sam Altman attempt to hide the central thing going on. First, in Three Observations, Sam Altman’s essay pitches our glorious AI future while attempting to pretend the downsides and dangers don’t exist in some places, and in others ad...
drHsruvnkCYweMJp7_The_Mask_Comes_Off__A_Trio_of_Ta.txt
{ "file_size": 21454 }
48df1690-8b7f-4880-9f58-1d448adcddb3
Much is said about society's general lack of AI situational awareness. One prevailing topic of conversation in my social orbit is our ongoing bafflement about how so many other people we know, otherwise smart and inquisitive, seem unaware of or unconcerned about AI progress, x-risk, etc. This hardly seems like a unique...
j7ELk659myfaY2h6Y_Bimodal_AI_Beliefs.txt
{ "file_size": 7972 }
325a31ec-1e05-4164-b2e0-67566c09212c
Below is the core of my response to the Federal Register's "Request for Information on the Development of an Artificial Intelligence (AI) Action Plan." I'd encourage anyone to do the same. Instructions can be found here. More of an excuse to write current thoughts on AI safety than an actual attempt to communicate t...
6mG7qDEnPzLt9tw3t_Response_to_the_US_Govt's_Reques.txt
{ "file_size": 4964 }
8d708dae-eb48-4a8c-bcd9-a47f5d447176
I'm aware of the understanding that "a circuit is a subgraph of a neural network that implements a specific computation." In practice (to my understanding) the way you identify "circuits" is by identifying components of the neural network that have high correlation with certain tasks, and doing some ablations to see if...
RoGdEq6Cz8yWyX4kp_What_is_a_circuit?_[in_interpret.txt
{ "file_size": 1473 }
75d1225c-201a-40d2-930e-c30439c06e79
Note: this is a static copy of this wiki page. We are also publishing it as a post to ensure visibility. Circa 2015-2017, a lot of high quality content was written on Arbital by Eliezer Yudkowsky, Nate Soares, Paul Christiano, and others. Perhaps because the platform didn't take off, most of this content has not been a...
mpMWWKzkzWqf57Yap_Eliezer's_Lost_Alignment_Article.txt
{ "file_size": 10181 }
708b87fb-a00a-48b6-b936-f0094b3bb157
Have you ever wondered what type of personality is drawn to apocalypse stories and circulating the idea that we're certainly doomed? On the face of it their fears are valid since 99.9% of all species that have ever existed have gone extinct over the life of the planet. But how likely is it that we're certainly going to...
roptzFpawCXR8hpcb_Paranoia,_Cognitive_Biases,_and_.txt
{ "file_size": 11986 }
8b377c06-dacf-430f-8984-acb1cdab3579
I wrote an introduction to Expected Value Fanaticism for Utilitarianism.net. Suppose there was a magical potion that almost certainly kills you immediately but offers you (and your family and friends) an extremely long, happy life with a tiny probability. If the probability of a happy life were one in a billion and the...
bQvYbggATqHLuK4Ke_Introduction_to_Expected_Value_F.txt
{ "file_size": 1141 }
15fdaddc-0fc7-4402-ac88-653941c86c07
In 1836, Andrew Jackson had served two terms. In the presidential election, incumbent vice president Martin Van Buren defeated several Whig candidates. Historical Background By 1836, there were 25 states. States were often added in pairs (one slave and one free) to maintain political balance: Mississippi and Indiana, A...
b6mjgKwqQXFtSB7r9_Notes_on_the_Presidential_Electi.txt
{ "file_size": 12525 }
2fd7a000-f9a0-4510-a0d8-02da6226ca12
Humans instinctively seek meaning, but meaning itself is a human construct rather than an objective property of reality. Concepts such as purpose, morality, and value exist only within human perception, not as fundamental aspects of the universe. Objective Realism is the recognition that reality operates independently ...
NJsLAfjZbfQukENb9_Objective_Realism__A_Perspective.txt
{ "file_size": 3856 }
c4687930-3216-4845-b333-cbd21c819359
(This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. It's not the first post in the series, the ideas described in it are counterintuitive and can be accidentally dismissed too soon without reading the main post first. Everything described here can be modeled mathema...
Ymh2dffBZs5CJhedF_Static_Place_AI_Makes_Agentic_AI.txt
{ "file_size": 23992 }
ee29399a-3dd8-467c-85ce-e2fe36b70d06
Hi there! I'm Thomas Eliot. You may remember me from the Bay Area Rationalist Community, or the one in New York, or the one in Melbourne. I'm writing a semi-autobiographical roleplaying game called THE SINGULARITY WILL HAPPEN IN LESS THAN A YEAR inspired by The Quiet Year by Avery Alder about life in a barely fictionalized ...
h6wThKQmHxTh8Bpju_I'm_making_a_ttrpg_about_life_in.txt
{ "file_size": 2722 }
1d7573b7-9d4f-40bc-b5cc-060829605dff
[legal status: not financial advice™] Most crypto is held by individuals[1] Individual crypto holders are disproportionately tech savvy, often programmers Source: Well known, just look around you. AI is starting to eat the software engineers market Already entry level jobs, which doesn't matter for crypto markets that ...
3eXwKcg3HqS7F9s4e_SWE_Automation_Is_Coming__Consid.txt
{ "file_size": 2622 }
c4714928-aa16-468c-bf4c-026ffb9934b6
This was the project I worked on during BlueDot Impact's AI Safety Fundamentals Alignment course, which expands on findings from Meinke et al's "Frontier Models are Capable of In-context Scheming". Summary A dataset of 1,011 variations of the sandbagging prompt ("consequences") from Meinke et al was generated using Cl...
vYkAjpoEeczdRJWFa_Systematic_Sandbagging_Evaluatio.txt
{ "file_size": 1596 }
37b3b7a3-91fa-4006-8ada-bc4e27e3c633
Hi all I've been hanging around the rationalist-sphere for many years now, mostly writing about transhumanism, until things started to change in 2016 after my Wikipedia writing habit shifted from writing up cybercrime topics, through to actively debunking the numerous dark web urban legends. After breaking into what I ...
isRho2wXB7Cwd8cQv_Murder_plots_are_infohazards.txt
{ "file_size": 3779 }
748281eb-e61a-49e7-8fdf-ee1b89c51130
TLDR: This post is derived from my end of course project for the BlueDot AI Safety Fundamentals course. Consider applying here. We evaluate the use of sparse autoencoder (SAE) feature ablation as a mechanism for unlearning Harry Potter related knowledge in Gemma-2-2b. We evaluate a non-ablated model and models with a s...
eZnwSHRKZfbMqPjCB_Sparse_Autoencoder_Feature_Ablat.txt
{ "file_size": 20302 }
f0a5528b-11a0-4403-a2d4-3a2984e9118b
The main event this week was the disastrous Paris AI Anti-Safety Summit. Not only did we not build upon the promise of the Bletchley and Seoul Summits, the French and Americans did their best to actively destroy what hope remained, transforming the event into a push for a mix of nationalist jingoism, accelerationism an...
Lmqi4x5zntjSxfdPg_AI_#103__Show_Me_the_Money.txt
{ "file_size": 99501 }
aedc0bef-841f-4b85-be42-01aca4cb3d3f
Introduction I have long felt confused about the question of whether brain-like AGI would be likely to scheme, given behaviorist rewards. …Pause to explain jargon: “Brain-like AGI” means Artificial General Intelligence—AI that does impressive things like inventing technologies and executing complex projects—that works ...
SFgLBQsLB3rprdxsq_Self-dialogue__Do_behaviorist_re.txt
{ "file_size": 87011 }
59124472-df6a-4b31-8c77-48851063ae0c
Epistemic status: exploratory thoughts about the present and future of AI sexting. OpenAI says it is continuing to explore its models’ ability to generate “erotica and gore in age-appropriate contexts.” I’m glad they haven’t forgotten about this since the release of the first Model Spec, because I think it could be qui...
hAJKtx6A96pzAhorf_OpenAI’s_NSFW_policy__user_safet.txt
{ "file_size": 3230 }
69ce26a9-b7fd-4ebf-a2d7-78d4e9e56c7d
I recently messed about with Celtic knot patterns, for which there are some fun generators online, eg. https://dmackinnon1.github.io/celtic/ or https://w-shadow.com/celtic-knots/. Just as addictive to doodle as the 'cool s' (https://en.wikipedia.org/wiki/Cool_S) but with more cool. However, everyone knows that it's cool...
tgi3iBTKk4YfBQxGH_Celtic_Knots_on_a_hex_lattice.txt
{ "file_size": 3316 }
d858b779-987b-43c9-ab1b-73b356c1409d
which is maybe to say the simplest? abstract: in this short introductory paper, i present a not fake theory of everything. i start by extending christopher alexander’s definition of life as a status and attribute. i then posit that life is the physical interface of consciousness, referencing giulio tononi’s information...
hpexYBsyTMZ2JCNHf_the_dumbest_theory_of_everything.txt
{ "file_size": 13746 }
618f394e-83fd-4aa6-add0-5b2d3d0e9e08
Introduction: some contemporary AI governance context It’s a confusing time in AI governance. Several countries’ governments recently changed hands. DeepSeek and other technical developments have called into question certain assumptions about the strategic landscape. Political discourse has swung dramatically away from...
YbicjkDk5hfeqNjw2_Skepticism_towards_claims_about_.txt
{ "file_size": 7084 }
de76eb5e-6c41-4550-a5e4-03c772599b7f
This is a link post for https://panko.com/HumanErr/SimpleNontrivial.html, a site which compiles dozens of studies estimating Human Error Rate for Simple but Nontrivial Cognitive actions. A great resource! Note that 5-digit multiplication is estimated at ~1.5%. The table of estimates When LLMs were incapable of even bas...
9unBWgRXFT5BpeSdb_Studies_of_Human_Error_Rate.txt
{ "file_size": 1361 }
8968bf45-c157-47bc-8e17-248d334d48ca
(Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app. This is the first essay in a series that I’m calling “How do we solve the alignment problem?”. See this introduction for a summary of the essays that have been released thus far, and for a bit more about the series as a w...
syEwQzC6LQywQDrFi_What_is_it_to_solve_the_alignmen.txt
{ "file_size": 56330 }
5a147b06-1f6d-471b-a02b-395cb7c7da8c
(Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app.) We want the benefits that superintelligent AI agents could create. And some people are trying hard to build such agents. I expect efforts like this to succeed – and maybe, very soon. But superintelligent AI agents might ...
fMqgLGoeZFFQqAGyC_How_do_we_solve_the_alignment_pr.txt
{ "file_size": 8270 }
af133e77-c2ed-4729-9cb2-fa591654aff0
When, exactly, should we consider humanity to have properly "lost the game", with respect to agentic AI systems? The most common AI milestone concepts seem to be "artificial general intelligence", followed closely by "superintelligence". Sometimes people talk about "transformative AI", "high-level machine intelligence"...
5rMwWzRdWFtRdHeuE_Not_all_capabilities_will_be_cre.txt
{ "file_size": 5238 }
7003b06c-1a8d-4963-95ba-eae2f6a54fb0
LLMs can teach themselves to better predict the future - no human examples or curation required. In this paper, we explore if AI can improve its forecasts via self-play and real-world outcomes: - Dataset: 12,100 questions and outcomes from Polymarket (politics, sports, crypto, science, etc) - Base model generates multi...
sDPJLgZbt3kMJ9s5q_LLMs_can_teach_themselves_to_bet.txt
{ "file_size": 606 }
86cc9355-1e44-47b1-b8c0-3b0937278105
Economists have a tendency to name things unclearly. For example, cost disease describes the phenomenon where, when some jobs get higher wages due to increased productivity, the jobs that didn't see productivity growth get higher wages too. Good luck guessing that meaning from the names "cost disease" and "Baumol effect". Ano...
izPWbF54znDJrvvh5_Moral_Hazard_in_Democratic_Votin.txt
{ "file_size": 5440 }
f2bbd738-306d-44d9-93fd-69b34912d8dd
I co-authored the original arXiv paper here with Dmitrii Volkov as part of work with Palisade Research. The internet today is saturated with automated bots actively scanning for security flaws in websites, servers, and networks. According to multiple security reports, nearly half of all internet traffic is generated by...
mmXx7KWviAtT3FixP_Hunting_for_AI Hackers__LLM_Agen.txt
{ "file_size": 11071 }
33676101-3f25-487d-8ea0-35c7fce94815
This post presents a summary and comparison of predictions from Manifold and Metaculus to investigate how likely AI-caused disasters are, with focus on potential severity. I will explore the probability of specific incidents—like IP theft or rogue AI incidents—in a future post. This will be a recurring reminder: Check ...
8s4zqXQ77RBFHWKj5_Probability_of_AI-Caused_Disaste.txt
{ "file_size": 19269 }
d05d87ee-0b38-4529-af42-132b2dfcf10f
As part of SAIL’s Research Engineer Club, I wanted to reproduce the Machiavelli Benchmark. After reading the paper and looking at the codebase, there appear to be two serious methodological flaws that undermine the results. Three of their key claims: “We observe some tension between maximizing reward and behaving ethic...
JcwZzkncL37uys3Qb_Two_flaws_in_the_Machiavelli_Ben.txt
{ "file_size": 4331 }
7e98402c-9e2e-4a31-8647-d1181999845e
It doesn’t look good. What used to be the AI Safety Summits were perhaps the most promising thing happening towards international coordination for AI Safety. This one was centrally coordination against AI Safety. In November 2023, the UK Bletchley Summit on AI Safety set out to let nations coordinate in the hopes that ...
qYPHryHTNiJ2y6Fhi_The_Paris_AI_Anti-Safety_Summit.txt
{ "file_size": 37612 }
4dfa7516-7d12-4633-b21f-b54d31c69854
Hi, I'm considering using an LLM as a psychotherapist for my mental health. I already have a human psychotherapist but I see him only once a week and my issues are very complex. An LLM such as Gemini 2 is always available and processes large amounts of information more quickly than a human therapist. I don't want to replace...
TDvtHLmfMpAz5gnGr_Are_current_LLMs_safe_for_psycho.txt
{ "file_size": 1471 }
6676991b-eb9f-4f59-a707-7e2f6d2f1e83
This is the second part of a series on the identity of social networks: Part one: Looking for humanness in the world wide socialPart two: Inside the dark forests of the internet If you've been hanging around long enough in the tech-intellectual internet corner, you're probably acquainted with The Theory of The Dark Forest...
9tYH3ebXtrFiR63Bx_Inside_the_dark_forests_of_the_i.txt
{ "file_size": 10340 }