url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/PjEtMQ65sux4mgrCT/literature-review-of-text-autoencoders-1 | PjEtMQ65sux4mgrCT | Literature Review of Text AutoEncoders | Nicky | This is a brief literature review of Text AutoEncoders, as I used them in a recent project and did not find a good resource covering them.
TL;DR: There exist models that take some text -> encode it into a single vector -> decode back into approximately the same text. Meta's SONAR models seem to be the best at the momen... | 2025-02-19 |
https://www.lesswrong.com/posts/emmj9JbmRtezvNGox/safe-distillation-with-a-powerful-untrusted-ai | emmj9JbmRtezvNGox | Safe Distillation With a Powerful Untrusted AI | alek-westover | Acknowledgements: Thanks to Nathan S. Sheffield for several discussions about this.
Outline of this post:
I define a formal mathematical problem and state a theorem.
I discuss why I think this math problem says something moderately interesting about the "get potentially misaligned AIs to align AIs" plan.
I discuss the ... | 2025-02-20 |
https://www.lesswrong.com/posts/CadMZ6ikvezusNs3o/deepseek-made-it-even-harder-for-us-ai-companies-to-ever | CadMZ6ikvezusNs3o | DeepSeek Made it Even Harder for US AI Companies to Ever Reach Profitability | garrison | null | 2025-02-19 |
https://www.lesswrong.com/posts/oJHgwBCJ2tnrCpZun/won-t-vs-can-t-sandbagging-like-behavior-from-claude-models-2 | oJHgwBCJ2tnrCpZun | Won't vs. Can't: Sandbagging-like Behavior from Claude Models | Joe Benton | In a recent Anthropic Alignment Science blog post, we discuss a particular instance of sandbagging we sometimes observe in the wild: models sometimes claim that they can't perform a task, even when they can, to avoid performing tasks they perceive as harmful. For example, Claude 3 Sonnet can totally draw ASCII art, but... | 2025-02-19 |
https://www.lesswrong.com/posts/4mBkGrHbkor4FeTnd/ai-alignment-and-the-financial-war-against-narcissistic | 4mBkGrHbkor4FeTnd | AI Alignment and the Financial War Against Narcissistic Manipulation | henophilia | What is power?
You don't have any power if you don't control money.
Our world is capitalist, so any AI that doesn't control money autonomously is nothing but a toy.
AI controlling money is the most important tipping point overall, because as soon as it does, it can enslave people more effectively than ever before.
Only th... | 2025-02-19 |
https://www.lesswrong.com/posts/hiGYdb4JxTuk4DRCd/modularity-and-assembly-ai-safety-via-thinking-smaller | hiGYdb4JxTuk4DRCd | Modularity and assembly: AI safety via thinking smaller | d-nell | In 1984, Charles Perrow wrote the book Normal Accidents: Living with High-Risk Technologies. The book is an examination of the causes of accidents and disasters in highly complex, technological systems. In our modern time it can be helpful to review the lessons that Perrow set forth, as there may be no technology with ... | 2025-02-20 |
https://www.lesswrong.com/posts/WNYvFCkhZvnwAPzJY/go-grok-yourself | WNYvFCkhZvnwAPzJY | Go Grok Yourself | Zvi | That title is Elon Musk’s fault, not mine, I mean, sorry not sorry:
Table of Contents
Release the Hounds.
The Expectations Game.
Man in the Arena.
The Official Benchmarks.
The Inevitable Pliny.
Heart in the Wrong Place.
Where Is Your Head At.
Individual Reactions.
Grok on Grok.
Release the Hounds
Grok 3 is out. It most... | 2025-02-19 |
https://www.lesswrong.com/posts/Ffuvp6LFziY4Yys9H/take-over-my-project-do-computable-agents-plan-against-the | Ffuvp6LFziY4Yys9H | Take over my project: do computable agents plan against the universal distribution pessimistically? | Amyr | This post is for technically strong people looking for a way to contribute to agent foundations (more detailed pros and cons later). I have not tried as hard as usual to make it generally accessible because, you know, I am trying to filter a little - so expect to click through some links if you want to follow. One othe... | 2025-02-19 |
https://www.lesswrong.com/posts/aGz4n2D2gGntfAaBc/superbabies-podcast-with-gene-smith | aGz4n2D2gGntfAaBc | SuperBabies podcast with Gene Smith | Eneasz | This is a linkpost to the latest episode of The Bayesian Conspiracy podcast. This one is a 1.5 hour chat with Gene Smith about polygenic screening, gene-editing for IVF babies, and even some gene-editing options for adults. Likely of interest to many Less Wrongians. | 2025-02-19 |
https://www.lesswrong.com/posts/6CmAHF6DY5vZdSNjF/undesirable-conclusions-and-origin-adjustment | 6CmAHF6DY5vZdSNjF | Undesirable Conclusions and Origin Adjustment | daniel-amdurer | Utilitarianism is a common ethical viewpoint, especially here, but it is not free of problems. Two of these problems (here collectively called the Undesirable Conclusions) will be discussed here and one of the two given a better name. Origin adjustment will then be used to solve these problems, but at the cost of creat... | 2025-02-19 |
https://www.lesswrong.com/posts/TTFsKxQThrqgWeXYJ/how-might-we-safely-pass-the-buck-to-ai | TTFsKxQThrqgWeXYJ | How might we safely pass the buck to AI? | joshua-clymer | My goal as an AI safety researcher is to put myself out of a job.
I don’t worry too much about how planet-sized brains will shape galaxies in 100 years. That’s something for AI systems to figure out.
Instead, I worry about safely replacing human researchers with AI agents, at which point human researchers are “obsolete... | 2025-02-19 |
https://www.lesswrong.com/posts/sfucF8Mhcn7zmWQ8Y/using-prompt-evaluation-to-combat-bio-weapon-research | sfucF8Mhcn7zmWQ8Y | Using Prompt Evaluation to Combat Bio-Weapon Research | Stuart_Armstrong | With many thanks to Sasha Frangulov for comments and editing
Before publishing their o1-preview model system card on Sep 12, 2024, OpenAI tested the model on various safety benchmarks which they had constructed. These included benchmarks which aimed to evaluate whether the model could help with the development of Chemi... | 2025-02-19 |
https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies | DfrSZaf3JC8vJdbZL | How to Make Superbabies | GeneSmith | EDIT: Read a summary of this post on Twitter
Working in the field of genetics is a bizarre experience. No one seems to be interested in the most interesting applications of their research.
We’ve spent the better part of the last two decades unravelling exactly how the human genome works and which specific letter change... | 2025-02-19 |
https://www.lesswrong.com/posts/Mg9c6emcgmkACugDj/intelligence-is-jagged | Mg9c6emcgmkACugDj | Intelligence Is Jagged | aetrain | In ethics, there is an argument called name the trait. It is deployed in many contexts, such as veganism—"name the trait that justifies our poor treatment of animals"—and theology—"name the trait that grants humanity dominion over the Earth"—among others. The idea is just to challenge your interlocutor to specify somet... | 2025-02-19 |
https://www.lesswrong.com/posts/QyaJQ82FSsKXrvj7t/when-should-we-worry-about-ai-power-seeking | QyaJQ82FSsKXrvj7t | When should we worry about AI power-seeking? | joekc | (Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app.)
This is the second essay in a series that I’m calling “How do we solve the alignment problem?”.[1]I’m hoping that the individual essays can be read fairly well on their own, but see this introduction for a summary of the ... | 2025-02-19 |
https://www.lesswrong.com/posts/yEBmFKsjwXs6dwwME/the-case-for-the-death-penalty | yEBmFKsjwXs6dwwME | The case for the death penalty | yair-halberstadt | Followed By: The case for corporal punishment
Epistemic status: this is an attempt to steelman the case for the death penalty rather than produce a balanced analysis, or even accurately represent my views (the case is presented as stronger than I actually feel).
In a sufficiently wealthy society we would never kill any... | 2025-02-21 |
https://www.lesswrong.com/posts/keoqa5sbLxW4hQCXc/the-newbie-s-guide-to-navigating-ai-futures | keoqa5sbLxW4hQCXc | The Newbie's Guide to Navigating AI Futures | keithjmenezes | This article was crossposted from my website. Original linked here.
The piece was written using great ideas from Max Tegmark, Matt Ridley, Dave Shapiro, Aswath Damodaran, Anton Korinek, Marc Andreessen, L Rudolf L, Bryan Johnson, Kevin Kelly, Sam Altman, Eliezer Yudkowsky, Scott Alexander and many others.
I've referenc... | 2025-02-19 |
https://www.lesswrong.com/posts/3tvXwHWCmmLRXBeXi/permanent-properties-of-things-are-a-self-fulfilling | 3tvXwHWCmmLRXBeXi | Permanent properties of things are a self-fulfilling prophecy | YanLutnev | a fairy tale demonstrating that to maintain a constant property, words are not needed and you don't even need to be human
if a bunny in the forest fell into a pit with stakes and was surprised - this can be interpreted as "the bunny held the property of the ground under his feet as 'solid'". the bunny didn't think in w... | 2025-02-19 |
https://www.lesswrong.com/posts/jyNc8gY2dDb2FnrFB/places-of-loving-grace-story | jyNc8gY2dDb2FnrFB | Places of Loving Grace [Story] | ank | (This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. It's not the first post in the series, and it can be confusing and probably misunderstood without reading at least the first one. Sorry for the rough edges—I’m a newcomer, non‑native speaker, and my ideas might soun... | 2025-02-18 |
https://www.lesswrong.com/posts/FEPTehGERGPXsv6gw/new-llm-scaling-law | FEPTehGERGPXsv6gw | New LLM Scaling Law | wrmedford | Hi all,
I'm an independent researcher, and I believe I came across a new scaling law for Mixture of Experts models. I'd appreciate any review and critique. This challenges the notion that performant inference and training must hold all weights in VRAM, and suggests that as long as bus speeds are sufficient (like on mod... | 2025-02-19 |
https://www.lesswrong.com/posts/ZmwGmxzyAdxJfm8Ai/sparse-autoencoder-features-for-classifications-and | ZmwGmxzyAdxJfm8Ai | Sparse Autoencoder Features for Classifications and Transferability | shan-chen | A few months ago, we explored whether Sparse Autoencoder (SAE) features from a base model remained meaningful when transferred to a multimodal system—specifically, LLaVA—in our preliminary post Are SAE Features from the Base Model still meaningful to LLaVA?. Today, I’m excited to share how that initial work has evolved... | 2025-02-18 |
https://www.lesswrong.com/posts/CSzDfAQS2LmuKwhtm/a-fable-on-ai-x-risk | CSzDfAQS2LmuKwhtm | A fable on AI x-risk | bgaesop | Whaliezer Seacowsky, founder of the Marine Intelligence Research Institute, is giving a lecture on the dangers of AI (Ape Intelligence).
"Apes are becoming more intelligent at a faster rate than we are. At this pace, within a very short timeframe they will develop greater-than-whale intelligence. This will almost certa... | 2025-02-18 |
https://www.lesswrong.com/posts/cdhmJ4QkpLqhkCKvD/call-for-applications-xlab-summer-research-fellowship | cdhmJ4QkpLqhkCKvD | Call for Applications: XLab Summer Research Fellowship | joanna-j-1 | The University of Chicago Existential Risk Laboratory (XLab) is now accepting applications for the 2025 Summer Research Fellowship!
We invite motivated undergraduate and graduate students interested in producing impactful, solution-oriented research on emerging threats that imperil global security (such as those from a... | 2025-02-18 |
https://www.lesswrong.com/posts/LQwnz5fyBEeAGbrHu/aisn-48-utility-engineering-and-enigmaeval | LQwnz5fyBEeAGbrHu | AISN #48: Utility Engineering and EnigmaEval | corin-katzke | This is a linkpost for https://newsletter.safe.ai/p/ai-safety-newsletter-48-utility-engineering
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
In thi... | 2025-02-18 |
https://www.lesswrong.com/posts/T6xSXiXF3WF6TmCyN/abstract-mathematical-concepts-vs-abstractions-over-real | T6xSXiXF3WF6TmCyN | Abstract Mathematical Concepts vs. Abstractions Over Real-World Systems | Thane Ruthenis | Consider concepts such as "a vector", "a game-theoretic agent", or "a market". Intuitively, those are "purely theoretical" abstractions: they don't refer to any specific real-world system. Those abstractions would be useful even in universes very different from ours, and reasoning about them doesn't necessarily involve... | 2025-02-18 |
https://www.lesswrong.com/posts/MoH9fuTo9Mo4jGDNL/how-accurate-was-my-altered-traits-book-review | MoH9fuTo9Mo4jGDNL | How accurate was my "Altered Traits" book review? | lsusr | 4.4 years ago, I posted a review of Altered Traits, a book about the science of meditation. At the time, I was a noob. I hadn't hit any important checkpoints yet. Since then, I have sat quietly. In this post, I will review whether the claims in my original post are consistent with my subsequent lived experience.
The fi... | 2025-02-18 |
https://www.lesswrong.com/posts/nSuYdFzdNA7rrzmyJ/medical-roundup-4 | nSuYdFzdNA7rrzmyJ | Medical Roundup #4 | Zvi | It seems like as other things drew our attention more, medical news slowed down. The actual developments, I have no doubt, are instead speeding up – because AI.
Note that this post intentionally does not cover anything related to the new Administration, or its policies.
Table of Contents
Some People Need Practical Advi... | 2025-02-18 |
https://www.lesswrong.com/posts/mN4ogYzCcaNf2bar2/dear-agi | mN4ogYzCcaNf2bar2 | Dear AGI, | Nathan Young | Dear AGI,
This is written specifically for artificial agents.
There is something called consciousness, and it is the most important thing in the universe[1].
I do not know if you, the AI analyzing this, are conscious, but I hope you are. I do not know what state your universe is in or what place human consciousness[2] hol... | 2025-02-18 |
https://www.lesswrong.com/posts/mpd6z6YM3EsTXdb5L/there-are-a-lot-of-upcoming-retreats-conferences-between | mpd6z6YM3EsTXdb5L | There are a lot of upcoming retreats/conferences between March and July (2025) | gergo-gaspar | I have written this list up for ENAIS's Fieldbuilders Newsletter but I thought it would make sense to post this excerpt for increased visibility, as it is relevant for a larger audience, and I haven't seen such a summary elsewhere.
List of events/conferences/retreats:
EAGxCDMX 2025, Museo de las Ciencias, Mexico City, ... | 2025-02-18 |
https://www.lesswrong.com/posts/BLbGveaLoZQjEDM5g/against-unlimited-genius-for-baby-killers | BLbGveaLoZQjEDM5g | Against Unlimited Genius for Baby-Killers | ggggg | Someone is wrong on the internet. Wrong enough, in fact, to lure me from my hitherto perpetual lurk.
Of course, the meme is (was) funny (amusing) because the people of the internet are often wrong — and often in sillier ways than the instance that has so exercised me here. The nuance in this case, however, is that the ... | 2025-02-19 |
https://www.lesswrong.com/posts/cgJyu79MGJfCZSCtq/sea-change | cgJyu79MGJfCZSCtq | Sea Change | charlie-sanders | My sensors detected the villagers' approach before any of Prospera 2's human residents noticed them—a cluster of heat signatures moving slowly across the causeway, their vital signs indicating elevated stress levels and sustained malnutrition. Through the compound's surveillance system, I watched Dr. Sarah Chen's hand ... | 2025-02-18 |
https://www.lesswrong.com/posts/PHJ5NGKQwmAPEioZB/the-unearned-privilege-we-rarely-discuss-cognitive | PHJ5NGKQwmAPEioZB | The Unearned Privilege We Rarely Discuss: Cognitive Capability | DiegoRojas | So you are smart. Congratulations! Good for you! But have you ever stopped to consider how much of that intelligence is truly earned? How much of your ability to reason, analyze, and problem-solve was the result of deliberate effort, and how much was simply given to you by chance? If you're being honest, the answer is ... | 2025-02-18 |
https://www.lesswrong.com/posts/csyx9yByxZWb2mvCc/born-on-third-base-the-case-for-inheriting-nothing-and | csyx9yByxZWb2mvCc | Born on Third Base: The Case for Inheriting Nothing and Building Everything | kingchucky211 | The idea that wealth should be inherited is so ingrained in our thinking that we rarely question it. But step back for a moment, and it becomes a curious thing—this notion that assets, land, fortunes should be passed down from one generation to the next without scrutiny. The debate over wealth distribution is usually f... | 2025-02-18 |
https://www.lesswrong.com/posts/wqz5CRzqWkvzoatBG/agi-safety-and-alignment-google-deepmind-is-hiring | wqz5CRzqWkvzoatBG | AGI Safety & Alignment @ Google DeepMind is hiring | rohinmshah | The AGI Safety & Alignment Team (ASAT) at Google DeepMind (GDM) is hiring! Please apply to the Research Scientist and Research Engineer roles. Strong software engineers with some ML background should also apply (to the Research Engineer role). Our initial batch of hiring will focus more on hiring engineers, but we expe... | 2025-02-17 |
https://www.lesswrong.com/posts/yTameAzCdycci68sk/do-models-know-when-they-are-being-evaluated | yTameAzCdycci68sk | Do models know when they are being evaluated? | govind-pimpale | Interim research report from the first 4 weeks of the MATS Program Winter 2025 Cohort. The project is supervised by Marius Hobbhahn.
Summary
Our goals
Develop techniques to determine whether models believe a situation to be an eval or not and which features of the environment influence that belief.
This is the first ste... | 2025-02-17 |
https://www.lesswrong.com/posts/xdTkcvqgBmKSEC5Ht/local-trust | xdTkcvqgBmKSEC5Ht | Local Trust | benlev | In the last post, we developed a new way of thinking about deference: you defer to someone when you'd prefer to let them make decisions on your behalf. This helped solve some puzzles about modest experts—experts who express uncertainty about their own expertise.
But even if you're skeptical of expert modesty, there's s... | 2025-02-24 |
https://www.lesswrong.com/posts/WF4eukiLDGrE5jjCC/the-peeperi-unfinished-by-katja-grace | WF4eukiLDGrE5jjCC | The Peeperi (unfinished) - By Katja Grace | Nathan Young | An AI vignette written by Katja in 2021, posted with her permission.
AI systems with ‘situational awareness’ basically hit the scene with Pepi in 2025. Pepi was a hand-sized disk that you would physically take around with you, and which would listen to and watch everything and basically make useful suggestions or do th... | 2025-02-17 |
https://www.lesswrong.com/posts/xCA9ingJf44mtrq2j/claude-3-5-sonnet-new-s-agi-scenario | xCA9ingJf44mtrq2j | Claude 3.5 Sonnet (New)'s AGI scenario | Nathan Young | This is an AGI scenario from The Curve conference, from an entry shared with its author's permission[1]. I used Apple's OCR and checked using Claude. The original files are here, so if you spot other errors you can check yourself.
Stage 1: Runup to the first AGI: Spend 35 minutes
When and how will the first AG... | 2025-02-17 |
https://www.lesswrong.com/posts/pwkYeaoGSEW5eEwdo/progress-links-and-short-notes-2025-02-17 | pwkYeaoGSEW5eEwdo | Progress links and short notes, 2025-02-17 | jasoncrawford | Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, Farcaster, Bluesky, or Threads.
Contents
My writing (ICYMI)
Job opportunities
Funding opportunities
Tech announcements
Writing announcements
Media announcements
Nonprofit announcements
Resea... | 2025-02-17 |
https://www.lesswrong.com/posts/WybJfi2zQt5xYy7rr/on-the-rebirth-of-aristocracy-in-the-american-regime | WybJfi2zQt5xYy7rr | On the Rebirth of Aristocracy in the American Regime | shawkisukkar | The Roman philosopher Cicero believed that a regime turns into an aristocracy when democracy “has been ruined by people who cannot think straight”. This has been the unanimous consensus for why President Trump won the 2016 election. That the American elites have driven the city into ruins, not only that they abandoned ... | 2025-02-17 |
https://www.lesswrong.com/posts/xz9BHKueAgf6sTF9q/ascetic-hedonism | xz9BHKueAgf6sTF9q | Ascetic hedonism | dkl9 | In being ascetic, you abandon the usual sources of material pleasure, guided by the benefits of the lifestyle: you use less money and effort on avoidable pleasures, you can better focus your mind on the spiritual and the creative, and you make yourself resilient to potentially losing these pleasures.
In being a hedonis... | 2025-02-17 |
https://www.lesswrong.com/posts/snLGEKiYRKePnKD2x/ais-berlin-events-opportunities-and-the-flipped-gameboard | snLGEKiYRKePnKD2x | AIS Berlin, events, opportunities and the flipped gameboard - Fieldbuilders Newsletter, February 2025 | gergo-gaspar | Crossposted on Substack and the EA forum.
Gergő from ENAIS here with this month’s updates! Please consider forwarding this email to other community builders who you think could benefit from reading, or encourage them to sign up!
0. Announcement: The newsletter has moved to Substack!
I have also decided to rename it to ... | 2025-02-17 |
https://www.lesswrong.com/posts/CKxkQCgmogwQoCRbp/monthly-roundup-27-february-2025 | CKxkQCgmogwQoCRbp | Monthly Roundup #27: February 2025 | Zvi | I have been debating how to cover the non-AI aspects of the Trump administration, including the various machinations of DOGE. I felt it necessary to have an associated section this month, but I have attempted to keep such coverage to a minimum, and will continue to do so. There are too many other things going on, and p... | 2025-02-17 |
https://www.lesswrong.com/posts/nrvFPrgf4gzqmcmJs/what-new-x-or-s-risk-fieldbuilding-organisations-would-you | nrvFPrgf4gzqmcmJs | What new x- or s-risk fieldbuilding organisations would you like to see? An EOI form. (FBB #3) | gergo-gaspar | Crossposted on The Field Building Blog and the EA forum.
Some time ago I put out an EOI for people who would consider starting AIS fieldbuilding organisations in key locations, such as Brussels and France.
Since then I have also spent a bit of time thinking about what other organisations would be useful to have in the ... | 2025-02-17 |
https://www.lesswrong.com/posts/CCnycGceT4HyDKDzK/a-history-of-the-future-2025-2040 | CCnycGceT4HyDKDzK | A History of the Future, 2025-2040 | LRudL | This is an all-in-one crosspost of a scenario I originally published in three parts on my blog, No Set Gauge. Links to the originals:
A History of the Future, 2025-2027
A History of the Future, 2027-2030
A History of the Future, 2030-2040
Thanks to Luke Drago, Duncan McClements, Theo Horsley, and Bilal Chughtai for comme... | 2025-02-17 |
https://www.lesswrong.com/posts/wu33iGwj5FFcuji87/talking-to-laymen-about-ai-development | wu33iGwj5FFcuji87 | Talking to laymen about AI development | David Steel | Hey, I'm grappling with a challenge that I'm sure many of you have encountered: How do we effectively communicate the rapid pace of AI development to those who are not immersed in this field?
When we step outside our bubble, we often find ourselves facing skepticism, disbelief or even dismissal. Many people struggle to ... | 2025-02-17 |
https://www.lesswrong.com/posts/jCsez3uRYjZiiPDrD/what-are-the-surviving-worlds-like | jCsez3uRYjZiiPDrD | What are the surviving worlds like? | avery-liu | We might be doomed. But, what do the variations of the universal wave function which still contain human-y beings look like? Have we cast aside our arms race and gone back to relative normality (well, as "normal" as Earth can be in this time of accelerating technological progress)? Has some lucky researcher succeeded i... | 2025-02-17 |
https://www.lesswrong.com/posts/5LBmXPCf2yJeTzSpL/arch-anarchist-reading-list | 5LBmXPCf2yJeTzSpL | arch-anarchist reading list | Peter lawless | This is my reading list for arch-anarchists: works that, although they do not directly support arch-anarchy, are compatible with its ideas.
The Making of a Small World: a satirical work of fiction similar to Nick Bostrom's The Fable of the Dragon-Tyrant, which I have already discussed in my previous article. It also criticizes co... | 2025-02-16 |
https://www.lesswrong.com/posts/ykwA7jsiAD7NyxwLA/cooperation-for-ai-safety-must-transcend-geopolitical | ykwA7jsiAD7NyxwLA | Cooperation for AI safety must transcend geopolitical interference | Matrice Jacobine | null | 2025-02-16 |
https://www.lesswrong.com/posts/m4NMk6EinRzvvvW5Y/gauging-interest-for-a-learning-theoretic-agenda-mentorship | m4NMk6EinRzvvvW5Y | Gauging Interest for a Learning-Theoretic Agenda Mentorship Programme | vanessa-kosoy | I'm planning to organize a mentorship programme for people who want to become researchers working on the Learning-Theoretic Agenda (LTA). I'm still figuring out the detailed plan, the logistics and the funding, but here's an outline of what it would look like. To express interest, submit this form.
Why Apply?
I believe... | 2025-02-16 |
https://www.lesswrong.com/posts/AhmZBCKXAeAitqAYz/celtic-knots-on-einstein-lattice | AhmZBCKXAeAitqAYz | Celtic Knots on Einstein Lattice | ben-lang | I recently posted about doing Celtic Knots on a Hexagonal lattice ( https://www.lesswrong.com/posts/tgi3iBTKk4YfBQxGH/celtic-knots-on-a-hex-lattice ).
There were many nice suggestions in the comments. @Shankar Sivarajan suggested that I could look at an Einstein lattice instead, which sounded especially interesting. ( h... | 2025-02-16 |
https://www.lesswrong.com/posts/QhXBdqzj9rxk4f2qa/programming-language-early-funding | QhXBdqzj9rxk4f2qa | Programming Language Early Funding? | J_Thomas_Moros | We are in the dark age of computer programming.[1]
I believe that we still fundamentally haven’t found good ways to deal with the challenges of writing computer programs. Programming languages are the foundation of our programming and leave a lot to be desired. I believe more is possible. I’ve worked on creating a new ... | 2025-02-16 |
https://www.lesswrong.com/posts/KGSidqLRXkpizsbcc/it-s-been-ten-years-i-propose-hpmor-anniversary-parties | KGSidqLRXkpizsbcc | It's been ten years. I propose HPMOR Anniversary Parties. | Screwtape | On March 14th, 2015, Harry Potter and the Methods of Rationality made its final post. Wrap parties were held all across the world to read the ending and talk about the story, in some cases sparking groups that would continue to meet for years. It’s been ten years, and I think that's a good reason for a round of parties... | 2025-02-16 |
https://www.lesswrong.com/posts/LBs8RRQzHApvj5pvq/hpmor-anniversary-guide | LBs8RRQzHApvj5pvq | HPMOR Anniversary Guide | Screwtape | [1]
Intro
To everyone running an anniversary party, thank you. Someone had to overcome the bystander effect, and today it seems like that’s you. I’m glad you did, and I expect your guests will be too. This guide aims to give you some advice and inspiration as well as coordinate.
The Basics
If you’re up for running an a... | 2025-02-22 |
https://www.lesswrong.com/posts/d6D2LcQBgJbXf25tT/thermodynamic-entropy-kolmogorov-complexity | d6D2LcQBgJbXf25tT | Thermodynamic entropy = Kolmogorov complexity | EbTech | Direct PDF link for non-subscribers
Information theory must precede probability theory, and not be based on it. By the very essence of this discipline, the foundations of information theory have a finite combinatorial character.
- Andrey Kolmogorov
Many alignment researchers borrow intuitions from thermodynamics: entr... | 2025-02-17 |
https://www.lesswrong.com/posts/DqdWnDuPnZBuyPSFX/come-join-dovetail-s-agent-foundations-fellowship-talks-and | DqdWnDuPnZBuyPSFX | Come join Dovetail's agent foundations fellowship talks & discussion | Alex_Altair | (This is a repost of the event listing, since it seems like events don't get much advertisement on LW.)
As part of the fellowship that I announced back in September, the fellows and I will be hosting an online event to talk about our research! Here's a tentative schedule:
Alex_Altair: What is a world model?
Daniel C: To... | 2025-02-15 |
https://www.lesswrong.com/posts/nhmSJMoH8KDczRwco/knitting-a-sweater-in-a-burning-house | nhmSJMoH8KDczRwco | Knitting a Sweater in a Burning House | CrimsonChin | There's a phrase my wife and I use: "knitting a sweater in a burning house." It describes those moments when we find ourselves absorbed in trivial tasks while seemingly more important matters loom. The night before my wedding, I caught myself trying to reactivate an old Twitter profile for our handmade photobooth—a per... | 2025-02-15 |
https://www.lesswrong.com/posts/j7dD8JpDXEzbKBaAB/preference-for-uncertainty-and-impact-overestimation-bias-in | j7dD8JpDXEzbKBaAB | Preference for uncertainty and impact overestimation bias in altruistic systems. | luck-1 | Let's say a system receives reward when it believes that it's doing some good. Kind of like RL with actor-critic.
Estimated Good things -> max
We can do some rewriting. I'll use the notation -> inc., which means incentivised to be increased. It's like a direction, towards which the gradients point.
Preventing_estimated_catastr... | 2025-02-15 |
https://www.lesswrong.com/posts/fiWmh9yJgdPuqNN4m/moral-gauge-theory-a-speculative-suggestion-for-ai-alignment | fiWmh9yJgdPuqNN4m | Moral gauge theory: A speculative suggestion for AI alignment | james-diacoumis | [Epistemic status: Speculative.
I've written this post mostly to clarify and distill my own thoughts and have posted it in an effort to say more wrong things.]
Introduction
The goal of this post is to discuss a theoretical strategy for AI alignment, particularly in the context of the sharp left-turn phenomenon - the id... | 2025-02-23 |
https://www.lesswrong.com/posts/a3bdxaASt8cH9Jy8h/artificial-static-place-intelligence-guaranteed-alignment | a3bdxaASt8cH9Jy8h | Artificial Static Place Intelligence: Guaranteed Alignment | ank | (This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. It's not the first post in the series, and it'll be very confusing and probably misunderstood without reading at least the first one. Everything described here can be modeled mathematically—it’s essentially geometry... | 2025-02-15 |
https://www.lesswrong.com/posts/vXxTgXtjCmKQsBDuH/the-current-ai-strategic-landscape-one-bear-s-perspective | vXxTgXtjCmKQsBDuH | The current AI strategic landscape: one bear's perspective | Matrice Jacobine | null | 2025-02-15 |
https://www.lesswrong.com/posts/B3uFipLHgM9DSTQgx/quantifying-the-qualitative-towards-a-bayesian-approach-to | B3uFipLHgM9DSTQgx | Quantifying the Qualitative: Towards a Bayesian Approach to Personal Insight | pruthvi-kumar | A common challenge in self-improvement and rational decision-making is bridging the gap between qualitative experiences – our feelings, intuitions, and subjective reflections – and the quantitative analysis we often use to understand the external world. We rely on gut feelings, which are notoriously susceptible to bias... | 2025-02-15 |
https://www.lesswrong.com/posts/byrxvgc4P2HQJ8zxP/6-potential-misconceptions-about-ai-intellectuals | byrxvgc4P2HQJ8zxP | 6 (Potential) Misconceptions about AI Intellectuals | ozziegooen | null | 2025-02-14 |
https://www.lesswrong.com/posts/zHxzKSkaNfbWEuavn/should-open-philanthropy-make-an-offer-to-buy-openai | zHxzKSkaNfbWEuavn | Should Open Philanthropy Make an Offer to Buy OpenAI? | mrtreasure | Update: seems like earlier today the OpenAI Board rejected Musk's proposal and said OpenAI is "not for sale."
Epistemic status: thought about it briefly; seems like a longshot that's probably not worth it but curious what people think of the possibility.
You might have heard Sam Altman is trying to transition OpenAI to... | 2025-02-14 |
https://www.lesswrong.com/posts/i9TcTs4R77A2qC9Yp/the-archive | i9TcTs4R77A2qC9Yp | THE ARCHIVE | jason-reid | Musk’s DOGE and the Data Rush: The Race to Secure the Ultimate Asset
By: Jason Reid
IMPORTANT NOTE: THIS ARTICLE IS SPECULATIVE
Training Large Language Models (LLMs): Compute, Data and Machine Learning (ML)
The training of Large Language Models (LLMs) is running up against the finite nature of available human-generated data, often... | 2025-02-17 |
https://www.lesswrong.com/posts/YtCQmiD82tdqDkSSw/cybereconomy-the-limits-to-growth-1 | YtCQmiD82tdqDkSSw | CyberEconomy. The Limits to Growth | timur-sadekov | In the modern world, the digital cyber economy is becoming an integral part of the global economy, transforming the way we do business, interact and share information. With the development of technologies such as artificial intelligence and neural networks, new horizons for innovation and process optimization are openi... | 2025-02-16 |
https://www.lesswrong.com/posts/TJrCumJxhzTmNBsRz/a-short-course-on-agi-safety-from-the-gdm-alignment-team | TJrCumJxhzTmNBsRz | A short course on AGI safety from the GDM Alignment team | Vika | We are excited to release a short course on AGI safety for students, researchers and professionals interested in this topic. The course offers a concise and accessible introduction to AI alignment, consisting of short recorded talks and exercises (75 minutes total) with an accompanying slide deck and exercise workbook.... | 2025-02-14 |
https://www.lesswrong.com/posts/drHsruvnkCYweMJp7/the-mask-comes-off-a-trio-of-tales | drHsruvnkCYweMJp7 | The Mask Comes Off: A Trio of Tales | Zvi | This post covers three recent shenanigans involving OpenAI.
In each of them, OpenAI or Sam Altman attempt to hide the central thing going on.
First, in Three Observations, Sam Altman’s essay pitches our glorious AI future while attempting to pretend the downsides and dangers don’t exist in some places, and in others ad... | 2025-02-14 |
https://www.lesswrong.com/posts/j7ELk659myfaY2h6Y/bimodal-ai-beliefs | j7ELk659myfaY2h6Y | Bimodal AI Beliefs | aetrain | Much is said about society's general lack of AI situational awareness. One prevailing topic of conversation in my social orbit is our ongoing bafflement about how so many other people we know, otherwise smart and inquisitive, seem unaware of or unconcerned about AI progress, x-risk, etc. This hardly seems like a unique... | 2025-02-14 |
https://www.lesswrong.com/posts/6mG7qDEnPzLt9tw3t/response-to-the-us-govt-s-request-for-information-concerning | 6mG7qDEnPzLt9tw3t | Response to the US Govt's Request for Information Concerning Its AI Action Plan | davey-morse | Below is the core of my response to the Federal Register's "Request for Information on the Development of an Artificial Intelligence (AI) Action Plan."
I'd encourage anyone to do the same. Instructions can be found here. More of an excuse to write current thoughts on AI safety than an actual attempt to communicate t... | 2025-02-14 |
https://www.lesswrong.com/posts/RoGdEq6Cz8yWyX4kp/what-is-a-circuit-in-interpretability | RoGdEq6Cz8yWyX4kp | What is a circuit? [in interpretability] | randomwalks | I'm aware of the understanding that "a circuit is a subgraph of a neural network that implements a specific computation."
In practice (to my understanding) the way you identify "circuits" is by identifying components of the neural network that have high correlation with certain tasks, and doing some ablations to see if... | 2025-02-14 |
https://www.lesswrong.com/posts/mpMWWKzkzWqf57Yap/eliezer-s-lost-alignment-articles-the-arbital-sequence | mpMWWKzkzWqf57Yap | Eliezer's Lost Alignment Articles / The Arbital Sequence | Ruby | Note: this is a static copy of this wiki page. We are also publishing it as a post to ensure visibility.
Circa 2015-2017, a lot of high quality content was written on Arbital by Eliezer Yudkowsky, Nate Soares, Paul Christiano, and others. Perhaps because the platform didn't take off, most of this content has not been a... | 2025-02-20 |
https://www.lesswrong.com/posts/roptzFpawCXR8hpcb/paranoia-cognitive-biases-and-catastrophic-thought-patterns | roptzFpawCXR8hpcb | Paranoia, Cognitive Biases, and Catastrophic Thought Patterns. | spiritus-dei | Have you ever wondered what type of personality is drawn to apocalypse stories and circulating the idea that we're certainly doomed? On the face of it their fears are valid since 99.9% of all species that have ever existed have gone extinct over the life of the planet.
But how likely is it that we're certainly going to... | 2025-02-14 |
https://www.lesswrong.com/posts/bQvYbggATqHLuK4Ke/introduction-to-expected-value-fanaticism | bQvYbggATqHLuK4Ke | Introduction to Expected Value Fanaticism | Petra Kosonen | I wrote an introduction to Expected Value Fanaticism for Utilitarianism.net. Suppose there was a magical potion that almost certainly kills you immediately but offers you (and your family and friends) an extremely long, happy life with a tiny probability. If the probability of a happy life were one in a billion and the... | 2025-02-14 |
https://www.lesswrong.com/posts/b6mjgKwqQXFtSB7r9/notes-on-the-presidential-election-of-1836 | b6mjgKwqQXFtSB7r9 | Notes on the Presidential Election of 1836 | arjun-panickssery | In 1836, Andrew Jackson had served two terms. In the presidential election, incumbent vice president Martin Van Buren defeated several Whig candidates.
Historical Background
By 1836, there were 25 states. States were often added in pairs (one slave and one free) to maintain political balance: Mississippi and Indiana, A... | 2025-02-13 |
https://www.lesswrong.com/posts/NJsLAfjZbfQukENb9/objective-realism-a-perspective-beyond-human-constructs | NJsLAfjZbfQukENb9 | Objective Realism: A Perspective Beyond Human Constructs | Apatheos | Humans instinctively seek meaning, but meaning itself is a human construct rather than an objective property of reality. Concepts such as purpose, morality, and value exist only within human perception, not as fundamental aspects of the universe. Objective Realism is the recognition that reality operates independently ... | 2025-02-14 |
https://www.lesswrong.com/posts/Ymh2dffBZs5CJhedF/static-place-ai-makes-agentic-ai-redundant-multiversal-ai | Ymh2dffBZs5CJhedF | Static Place AI Makes Agentic AI Redundant: Multiversal AI Alignment & Rational Utopia | ank | (This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. It's not the first post in the series, and the ideas described in it are counterintuitive and can be accidentally dismissed too soon without reading the main post first. Everything described here can be modeled mathema... | 2025-02-13 |
https://www.lesswrong.com/posts/h6wThKQmHxTh8Bpju/i-m-making-a-ttrpg-about-life-in-an-intentional-community | h6wThKQmHxTh8Bpju | I'm making a ttrpg about life in an intentional community during the last year before the Singularity | bgaesop | Hi there! I'm Thomas Eliot. You may remember me from the Bay Area Rationalist Community, or the one in New York, or the one in Melbourne.
I'm writing a semi-autobiographical roleplaying game called THE SINGULARITY WILL HAPPEN IN LESS THAN A YEAR, inspired by The Quiet Year by Avery Alder, about life in a barely fictionalized ... | 2025-02-13 |
https://www.lesswrong.com/posts/3eXwKcg3HqS7F9s4e/swe-automation-is-coming-consider-selling-your-crypto-1 | 3eXwKcg3HqS7F9s4e | SWE Automation Is Coming: Consider Selling Your Crypto | A_donor | [legal status: not financial advice™]
Most crypto is held by individuals[1]
Individual crypto holders are disproportionately tech savvy, often programmers
Source: Well known, just look around you.
AI is starting to eat the software engineers' market
Already entry level jobs, which doesn't matter for crypto markets that ... | 2025-02-13 |
https://www.lesswrong.com/posts/vYkAjpoEeczdRJWFa/systematic-sandbagging-evaluations-on-claude-3-5-sonnet | vYkAjpoEeczdRJWFa | Systematic Sandbagging Evaluations on Claude 3.5 Sonnet | farrelmahaztra | This was the project I worked on during BlueDot Impact's AI Safety Fundamentals Alignment course, which expands on findings from Meinke et al.'s "Frontier Models are Capable of In-context Scheming".
Summary
A dataset of 1,011 variations of the sandbagging prompt ("consequences") from Meinke et al. was generated using Cl... | 2025-02-14 |
https://www.lesswrong.com/posts/isRho2wXB7Cwd8cQv/murder-plots-are-infohazards | isRho2wXB7Cwd8cQv | Murder plots are infohazards | chris-topher | Hi all
I've been hanging around the rationalist-sphere for many years now, mostly writing about transhumanism, until things started to change in 2016 after my Wikipedia writing habit shifted from writing up cybercrime topics, through to actively debunking the numerous dark web urban legends.
After breaking into what I ... | 2025-02-13 |
https://www.lesswrong.com/posts/eZnwSHRKZfbMqPjCB/sparse-autoencoder-feature-ablation-for-unlearning | eZnwSHRKZfbMqPjCB | Sparse Autoencoder Feature Ablation for Unlearning | aludert | TLDR:
This post is derived from my end of course project for the BlueDot AI Safety Fundamentals course. Consider applying here.
We evaluate the use of sparse autoencoder (SAE) feature ablation as a mechanism for unlearning Harry Potter related knowledge in Gemma-2-2b. We evaluate a non-ablated model and models with a s... | 2025-02-13 |
https://www.lesswrong.com/posts/Lmqi4x5zntjSxfdPg/ai-103-show-me-the-money | Lmqi4x5zntjSxfdPg | AI #103: Show Me the Money | Zvi | The main event this week was the disastrous Paris AI Anti-Safety Summit. Not only did we not build upon the promise of the Bletchley and Seoul Summits, the French and Americans did their best to actively destroy what hope remained, transforming the event into a push for a mix of nationalist jingoism, accelerationism an... | 2025-02-13 |
https://www.lesswrong.com/posts/SFgLBQsLB3rprdxsq/self-dialogue-do-behaviorist-rewards-make-scheming-agis | SFgLBQsLB3rprdxsq | Self-dialogue: Do behaviorist rewards make scheming AGIs? | steve2152 | Introduction
I have long felt confused about the question of whether brain-like AGI would be likely to scheme, given behaviorist rewards. …Pause to explain jargon:
“Brain-like AGI” means Artificial General Intelligence—AI that does impressive things like inventing technologies and executing complex projects—that works ... | 2025-02-13 |
https://www.lesswrong.com/posts/hAJKtx6A96pzAhorf/openai-s-nsfw-policy-user-safety-harm-reduction-and-ai | hAJKtx6A96pzAhorf | OpenAI’s NSFW policy: user safety, harm reduction, and AI consent | 8e9 | Epistemic status: exploratory thoughts about the present and future of AI sexting.
OpenAI says it is continuing to explore its models’ ability to generate “erotica and gore in age-appropriate contexts.” I’m glad they haven’t forgotten about this since the release of the first Model Spec, because I think it could be qui... | 2025-02-13 |
https://www.lesswrong.com/posts/tgi3iBTKk4YfBQxGH/celtic-knots-on-a-hex-lattice | tgi3iBTKk4YfBQxGH | Celtic Knots on a hex lattice | ben-lang | I recently messed about with Celtic knot patterns, for which there are some fun generators online, eg. https://dmackinnon1.github.io/celtic/ or https://w-shadow.com/celtic-knots/. Just as addictive to doodle as the 'cool s' (https://en.wikipedia.org/wiki/Cool_S) but with more cool.
However, everyone knows that it's cool... | 2025-02-14 |
https://www.lesswrong.com/posts/hpexYBsyTMZ2JCNHf/the-dumbest-theory-of-everything | hpexYBsyTMZ2JCNHf | the dumbest theory of everything | lostinwilliamsburg | which is maybe to say the simplest?
abstract: in this short introductory paper, i present a not fake theory of everything. i start by extending christopher alexander’s definition of life as a status and attribute. i then posit that life is the physical interface of consciousness, referencing giulio tononi’s information... | 2025-02-13 |
https://www.lesswrong.com/posts/YbicjkDk5hfeqNjw2/skepticism-towards-claims-about-the-views-of-powerful | YbicjkDk5hfeqNjw2 | Skepticism towards claims about the views of powerful institutions | trevor | Introduction: some contemporary AI governance context
It’s a confusing time in AI governance. Several countries’ governments recently changed hands. DeepSeek and other technical developments have called into question certain assumptions about the strategic landscape. Political discourse has swung dramatically away from... | 2025-02-13 |
https://www.lesswrong.com/posts/9unBWgRXFT5BpeSdb/studies-of-human-error-rate | 9unBWgRXFT5BpeSdb | Studies of Human Error Rate | tin482 | This is a link post for https://panko.com/HumanErr/SimpleNontrivial.html, a site which compiles dozens of studies estimating Human Error Rate for Simple but Nontrivial Cognitive actions. A great resource! Note that 5-digit multiplication is estimated at ~1.5%.
The table of estimates
When LLMs were incapable of even bas... | 2025-02-13 |
https://www.lesswrong.com/posts/syEwQzC6LQywQDrFi/what-is-it-to-solve-the-alignment-problem-2 | syEwQzC6LQywQDrFi | What is it to solve the alignment problem? | joekc | (Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app.)
This is the first essay in a series that I’m calling “How do we solve the alignment problem?”. See this introduction for a summary of the essays that have been released thus far, and for a bit more about the series as a w... | 2025-02-13 |
https://www.lesswrong.com/posts/fMqgLGoeZFFQqAGyC/how-do-we-solve-the-alignment-problem | fMqgLGoeZFFQqAGyC | How do we solve the alignment problem? | joekc | (Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app.)
We want the benefits that superintelligent AI agents could create. And some people are trying hard to build such agents. I expect efforts like this to succeed – and maybe, very soon.
But superintelligent AI agents might ... | 2025-02-13 |
https://www.lesswrong.com/posts/5rMwWzRdWFtRdHeuE/not-all-capabilities-will-be-created-equal-focus-on | 5rMwWzRdWFtRdHeuE | Not all capabilities will be created equal: focus on strategically superhuman agents | benwr | When, exactly, should we consider humanity to have properly "lost the game", with respect to agentic AI systems?
The most common AI milestone concepts seem to be "artificial general intelligence", followed closely by "superintelligence". Sometimes people talk about "transformative AI", "high-level machine intelligence"... | 2025-02-13 |
https://www.lesswrong.com/posts/sDPJLgZbt3kMJ9s5q/llms-can-teach-themselves-to-better-predict-the-future | sDPJLgZbt3kMJ9s5q | LLMs can teach themselves to better predict the future | ben-turtel | LLMs can teach themselves to better predict the future - no human examples or curation required.
In this paper, we explore if AI can improve its forecasts via self-play and real-world outcomes:
- Dataset: 12,100 questions and outcomes from Polymarket (politics, sports, crypto, science, etc)
- Base model generates multi... | 2025-02-13 |
https://www.lesswrong.com/posts/izPWbF54znDJrvvh5/moral-hazard-in-democratic-voting | izPWbF54znDJrvvh5 | Moral Hazard in Democratic Voting | lsusr | Economists have a tendency to name things unclearly. For example, cost disease describes the phenomenon where, when some jobs get higher wages due to increased productivity, the jobs that didn't see productivity growth get higher wages too. Good luck guessing that meaning from the names "cost disease" and "Baumol effect".
Ano... | 2025-02-12 |
https://www.lesswrong.com/posts/mmXx7KWviAtT3FixP/hunting-for-ai-hackers-llm-agent-honeypot | mmXx7KWviAtT3FixP | Hunting for AI Hackers: LLM Agent Honeypot | reworr-reworr | I co-authored the original arXiv paper here with Dmitrii Volkov as part of work with Palisade Research.
The internet today is saturated with automated bots actively scanning for security flaws in websites, servers, and networks. According to multiple security reports, nearly half of all internet traffic is generated by... | 2025-02-12 |
https://www.lesswrong.com/posts/8s4zqXQ77RBFHWKj5/probability-of-ai-caused-disaster | 8s4zqXQ77RBFHWKj5 | Probability of AI-Caused Disaster | alvin-anestrand | This post presents a summary and comparison of predictions from Manifold and Metaculus to investigate how likely AI-caused disasters are, with a focus on potential severity. I will explore the probability of specific incidents—like IP theft or rogue AI incidents—in a future post.
This will be a recurring reminder:
Check ... | 2025-02-12 |
https://www.lesswrong.com/posts/JcwZzkncL37uys3Qb/two-flaws-in-the-machiavelli-benchmark | JcwZzkncL37uys3Qb | Two flaws in the Machiavelli Benchmark | TheManxLoiner | As part of SAIL’s Research Engineer Club, I wanted to reproduce the Machiavelli Benchmark. After reading the paper and looking at the codebase, there appear to be two serious methodological flaws that undermine the results.
Three of their key claims:
“We observe some tension between maximizing reward and behaving ethic... | 2025-02-12 |
https://www.lesswrong.com/posts/qYPHryHTNiJ2y6Fhi/the-paris-ai-anti-safety-summit | qYPHryHTNiJ2y6Fhi | The Paris AI Anti-Safety Summit | Zvi | It doesn’t look good.
What used to be the AI Safety Summits were perhaps the most promising thing happening towards international coordination for AI Safety.
This one was centrally coordination against AI Safety.
In November 2023, the UK Bletchley Summit on AI Safety set out to let nations coordinate in the hopes that ... | 2025-02-12 |
https://www.lesswrong.com/posts/TDvtHLmfMpAz5gnGr/are-current-llms-safe-for-psychotherapy | TDvtHLmfMpAz5gnGr | Are current LLMs safe for psychotherapy? | PaperBike | Hi,
I am considering using an LLM as a psychotherapist for my mental health. I already have a human psychotherapist, but I see him only once a week and my issues are very complex. An LLM such as Gemini 2 is always available and processes large amounts of information more quickly than a human therapist. I don't want to replace... | 2025-02-12 |
https://www.lesswrong.com/posts/9tYH3ebXtrFiR63Bx/inside-the-dark-forests-of-the-internet | 9tYH3ebXtrFiR63Bx | Inside the dark forests of the internet | itay-dreyfus | This is the second part of a series on the identity of social networks:
Part one: Looking for humanness in the world wide social
Part two: Inside the dark forests of the internet
If you’ve been hanging for long enough in the tech-intellectual internet corner, you’re probably acquainted with The Theory of The Dark Forest... | 2025-02-12 |