url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/sjqBe4E67jJqzf7vF/boston-s-line-1 | sjqBe4E67jJqzf7vF | Boston's Line 1 | jkaufman | Over on r/mbta people were speculating: if Boston's rapid transit lines were numbered, like some other cities, which line would get the honor of being first? The Orange Line, as the main line of the historical Boston Elevated Railway? The Red Line, with the highest (pre-covid) ridership? The Green Line, as the olde... | 2024-03-04 |
https://www.lesswrong.com/posts/JbE7KynwshwkXPJAJ/anthropic-release-claude-3-claims-greater-than-gpt-4 | JbE7KynwshwkXPJAJ | Anthropic release Claude 3, claims >GPT-4 Performance | LawChan | Today, we're announcing the Claude 3 model family, which sets new industry benchmarks across a wide range of cognitive tasks. The family includes three state-of-the-art models in ascending order of capability: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. Each successive model offers increasingly powerful perform... | 2024-03-04 |
https://www.lesswrong.com/posts/7LnHFj4gs5Zd4WKcu/notes-on-awe | 7LnHFj4gs5Zd4WKcu | Notes on Awe | David_Gross | This post examines the virtue related to awe. As with my other posts in this sequence, I’m less interested in breaking new ground and more in gathering and synthesizing whatever wisdom I could find on the subject. I wrote this not as an expert on the topic, but as someone who wants to learn more about it. I hope it wil... | 2024-03-04 |
https://www.lesswrong.com/posts/nkLtFTPs8gvCKutS3/interview-stakeout-ai-w-dr-peter-park | nkLtFTPs8gvCKutS3 | INTERVIEW: StakeOut.AI w/ Dr. Peter Park | jacobhaimes | Hey everyone! This week's episode of the Into AI Safety podcast is an interview with Dr. Peter Park. Along with Harry Luk and one other cofounder, he started StakeOut.AI, a non-profit with the goal of making AI go well, for humans. Unfortunately, due to funding pressures, the organization recently had to dissolve, but ... | 2024-03-04 |
https://www.lesswrong.com/posts/m8ahbiumz8C9mnGnp/housing-roundup-7 | m8ahbiumz8C9mnGnp | Housing Roundup #7 | Zvi | Legalize housing. It is both a good slogan and also a good idea. The struggle is real, ongoing and ever-present. Do not sleep on it. The Housing Theory of Everything applies broadly, even to the issue of AI. If we built enough housing that life vastly improved and people could envision a positive future, they would be ... | 2024-03-04 |
https://www.lesswrong.com/posts/4ZGHderZEEmQuCvxR/exploring-the-evolution-and-migration-of-different-layer | 4ZGHderZEEmQuCvxR | Exploring the Evolution and Migration of Different Layer Embedding in LLMs | sprout_ust | [Edit on 17th Mar] After conducting experiments on more data points (5000 texts) on the Pile dataset (more sample sources), we are confident that the experimental results described earlier are reliable. Therefore, we have opened the code. Recently, we conducted several experiments focused on the evolution and migration... | 2024-03-08 |
https://www.lesswrong.com/posts/di4Dhho4xZ4x9ABna/are-we-so-good-to-simulate | di4Dhho4xZ4x9ABna | Are we so good to simulate? | KatjaGrace | If you believe that,— a) a civilization like ours is likely to survive into technological incredibleness, and b) a technologically incredible civilization is very likely to create ‘ancestor simulations’, —then the Simulation Argument says you should expect that you are currently in such an ancestor simulation, rather t... | 2024-03-04 |
https://www.lesswrong.com/posts/x5CNievhunvBjJAC9/the-broken-screwdriver-and-other-parables | x5CNievhunvBjJAC9 | The Broken Screwdriver and other parables | bhauth | previously: The Parable Of The Fallen Pendulum The Broken Screwdriver Alice: Hey Bob, I need something to put this screw in the wall. Bob: OK, here's a screwdriver. Alice starts trying to hammer a screw in using the butt of the screwdriver. Alice: I think this screwdriver is broken. Bob: You're not using it correctly, ... | 2024-03-04 |
https://www.lesswrong.com/posts/unG2MpHFdzbfdSbxY/grief-is-a-fire-sale | unG2MpHFdzbfdSbxY | Grief is a fire sale | Nathan Young | Written in Nov 2023. For me, grief is often about the future. It isn't about the loss of past times, since those were already gone. It is the loss of hope. It's missing the last train to a friend's wedding or never being able to hear another of my grandfather's sarcastic quips. In the moment of grief, it is the feeling o... | 2024-03-04 |
https://www.lesswrong.com/posts/pBAre8ir5YorcRBet/good-hpmor-scenes-passages | pBAre8ir5YorcRBet | Good HPMoR scenes / passages? | PhilGoetz | I'm doing a reading of good fan-fiction at a con this weekend, to counter the many "bad fanfic reading" panels. I want to read an interesting passage from HPMoR, but I can't remember any particular passage myself, and I don't want to re-read the whole thing this week. Can anyone remember any scene or passage that stu... | 2024-03-03 |
https://www.lesswrong.com/posts/7RBbwqHoimj92MRnL/social-status-part-2-2-everything-else | 7RBbwqHoimj92MRnL | Social status part 2/2: everything else | steve2152 | 2.1 Post summary / Table of contents This is the second of two blog posts where I try to make sense of the whole universe of social-status-related behaviors and phenomena. The previous one was: “Social status part 1/2: negotiations over object-level preferences”. In that previous post, I was focusing on the simplified ... | 2024-03-05 |
https://www.lesswrong.com/posts/yNSyYJTKboKKiEQEE/attending-sold-out-beantown-stomp | yNSyYJTKboKKiEQEE | Attending Sold-Out Beantown Stomp | jkaufman | Beantown Stomp, the Boston contra dance weekend I helped start in 2018, is two weeks out. It sold out a month ago, which is good and bad: it's great that lots of people are coming, but not that there are many more who won't be able to. Someone in the latter category wrote to me, which got me thinking about two options ... | 2024-03-03 |
https://www.lesswrong.com/posts/gjncY9CBeit28DssW/ai-things-that-are-perhaps-as-important-as-human-controlled | gjncY9CBeit28DssW | AI things that are perhaps as important as human-controlled AI | Chi Nguyen | null | 2024-03-03 |
https://www.lesswrong.com/posts/XfX4WT4T4Fkh4JpNY/a-tedious-and-effective-way-to-learn-chinese-characters | XfX4WT4T4Fkh4JpNY | A tedious and effective way to learn 汉字 (Chinese characters) | dkl9 | Sometimes I look up Chinese words on my phone, usually on Wiktionary. Any foreign-enough characters (including all 汉字, i.e. Chinese characters) show up in my phone's web browser as mutually-identical blank boxes (tofu). If I want to see the actual form of the 汉字 — I usually do — I must find images, rather than text. Wi... | 2024-03-03 |
https://www.lesswrong.com/posts/z8F7yA63m9nonzEBv/if-you-controlled-the-first-agentic-agi-what-would-you-set | z8F7yA63m9nonzEBv | If you controlled the first agentic AGI, what would you set as its first task(s)? | sweenesm | (If you work for a company that’s trying to develop AGI, I suggest you don’t publicly answer this question lest the media get ahold of it.) (Let’s assume you’ve “aligned” this AGI and done significant sandbox testing before you let it loose with its first task(s). If you’d like to change or add to these assumptions for... | 2024-03-03 |
https://www.lesswrong.com/posts/DmxGYLmoueewAzp4r/anomalous-concept-detection-for-detecting-hidden-cognition | DmxGYLmoueewAzp4r | Anomalous Concept Detection for Detecting Hidden Cognition | paul-colognese | Thanks to Johannes Treutlein, Erik Jenner, Joseph Bloom, and Arun Jose for their discussions and feedback. Summary Monitoring an AI’s internals for features/concepts unrelated to the task the AI appears to be performing may help detect when the AI is performing hidden cognition. For example, it would be very suspicious... | 2024-03-04 |
https://www.lesswrong.com/posts/tJmpsEevCcEfL6a7Z/self-resolving-prediction-markets | tJmpsEevCcEfL6a7Z | Self-Resolving Prediction Markets | PeterMcCluskey | Back in 2008, I criticized the book Predictocracy for proposing prediction markets whose contracts would be resolved without reference to ground truth. Recently, Srinivasan, Karger, and Chen (SKC) published a more scholarly paper titled Self-Resolving Prediction Markets for Unverifiable Outcomes. Manipulation In the naiv... | 2024-03-03 |
https://www.lesswrong.com/posts/f6m7mC9F9r4fEhnaP/increase-the-tax-value-of-donations-with-high-variance | f6m7mC9F9r4fEhnaP | Increase the tax value of donations with high-variance investments? | korin43 | The United States has a strange (legal) tax loophole where you can double-count capital gains when donating securities (with some restrictions): If you buy a stock, the value goes up, and you've held it for at least a year, you can donate it and claim a tax deduction on the current market value instead of the value of w... | 2024-03-03 |
https://www.lesswrong.com/posts/EMmEFHrnGt3hfSxtZ/common-philosophical-mistakes-according-to-joe-schmid-videos | EMmEFHrnGt3hfSxtZ | Common Philosophical Mistakes, according to Joe Schmid [videos] | DanielFilan | A 7-part series of videos detailing mistakes one can make in philosophy, according to Joe Schmid. The first video focusses on general reasoning issues, and is therefore most likely to be of interest (for instance: suppose observation E is evidence for A, and A implies B - does that mean E is evidence for B?). If you do... | 2024-03-03 |
https://www.lesswrong.com/posts/Q4yhuwzoy3kNRbv4m/agreeing-with-stalin-in-ways-that-exhibit-generally | Q4yhuwzoy3kNRbv4m | Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles | Zack_M_Davis | It was not the sight of Mitchum that made him sit still in horror. It was the realization that there was no one he could call to expose this thing and stop it—no superior anywhere on the line, from Colorado to Omaha to New York. They were in on it, all of them, they were doing the same, they had given Mitchum the lead ... | 2024-03-02 |
https://www.lesswrong.com/posts/ZyEfeJK2F7FKcRCmc/the-world-in-2029 | ZyEfeJK2F7FKcRCmc | The World in 2029 | Nathan Young | Links are to prediction markets, subscripts/brackets are my own forecasts, done rapidly. I open my eyes. It’s nearly midday. I drink my morning Huel. How do I feel? My life feels pretty good. AI progress is faster than ever, but I've gotten used to the upward slope by now. There has perhaps recently been a huge recessi... | 2024-03-02 |
https://www.lesswrong.com/posts/NAjBxEoSPMcYDM5WF/in-defense-of-anthropically-updating-edt | NAjBxEoSPMcYDM5WF | In defense of anthropically updating EDT | antimonyanthony | Suppose you’re reflecting on your views on two thorny topics: decision theory and anthropics. Considering decision problems that don’t involve anthropics (i.e., don’t involve inferences about the world from indexical information), you might find yourself very sympathetic to evidential decision theory (EDT).[1]And, cons... | 2024-03-05 |
https://www.lesswrong.com/posts/dDJcNypLDE5BGnPd2/the-most-dangerous-idea | dDJcNypLDE5BGnPd2 | The Most Dangerous Idea | rogersbacon | Previously: Epistemic Hell, The Journal of Dangerous Ideas Scott Mutter We may safely predict that it will be the timidity of our hypotheses, and not their extravagance, which will provoke the derision of posterity. (H. H. Price) Introduction Jeffrey Kripal has written extensively in recent years about what he calls th... | 2024-03-02 |
https://www.lesswrong.com/posts/pKsPpxXDjoP6eGfmA/future-life | pKsPpxXDjoP6eGfmA | Future life | DavidMadsen | Let's say we were to create a neuromorphic AI (mainly talking about brain emulation) whose goal is to find out everything that is true in the universe, and that has fun and feels good while doing so. Some may despair that it would lack humanity, but everything good about humanity is good in itself, and not because us h... | 2024-03-02 |
https://www.lesswrong.com/posts/YZLo9wN9YhjHQqayH/ugo-conti-s-whistle-controlled-synthesizer | YZLo9wN9YhjHQqayH | Ugo Conti's Whistle-Controlled Synthesizer | jkaufman | Whistling is very nearly a pure sound wave, and a high one at that, which can sound thin and obnoxious. A few years ago I got excited about making whistling sound better by bringing it down a few octaves and synthesizing overtones, and built something that I now use when playing live for dancing. And then, sitting in ... | 2024-03-02 |
https://www.lesswrong.com/posts/YTZA2foxJcJZqbbNA/a-one-sentence-formulation-of-the-ai-x-risk-argument-i-try | YTZA2foxJcJZqbbNA | A one-sentence formulation of the AI X-Risk argument I try to make | tcelferact | null | 2024-03-02 |
https://www.lesswrong.com/posts/wwioAJHTeaGqBvtjd/update-on-developing-an-ethics-calculator-to-align-an-agi-to | wwioAJHTeaGqBvtjd | Update on Developing an Ethics Calculator to Align an AGI to | sweenesm | TL;DR: This is an update on my progress towards creating an “ethics calculator” that could be used to help align an AGI to act ethically. In its first iteration, the calculator uses a utilitarian framework with “utility” measured in terms of value as net “positive” experiences, with the value of rights explicitly inclu... | 2024-03-12 |
https://www.lesswrong.com/posts/uJmWiethBsCnq68pg/if-you-weren-t-such-an-idiot | uJmWiethBsCnq68pg | If you weren't such an idiot... | kave | My friend Buck once told me that he often had interactions with me that felt like I was saying “If you weren’t such a fucking idiot, you would obviously do…” Here’s a list of such advice in that spirit. Note that if you do/don’t do these things, I’m technically calling you an idiot, but I do/don’t do a bunch of them to... | 2024-03-02 |
https://www.lesswrong.com/posts/h6kChrecznGD4ikqv/increasing-iq-is-trivial | h6kChrecznGD4ikqv | Increasing IQ is trivial | George3d6 | TL;DR - It took me about 14 days to increase my IQ by 13 points, in a controlled experiment that involved no learning, it was a relatively pleasant process, more people should be doing this. A common cliche in many circles is that you can’t increase IQ. This is obviously false, the largest well-documented increase in I... | 2024-03-01 |
https://www.lesswrong.com/posts/pZvnoW4EiYo9nu3MT/elon-files-grave-charges-against-openai | pZvnoW4EiYo9nu3MT | Elon files grave charges against OpenAI | MakoYass | (CN) — Elon Musk says in a Thursday lawsuit that Sam Altman and OpenAI have betrayed an agreement from the artificial intelligence research company's founding to develop the technology for the benefit of humanity rather than profit. In the suit filed Thursday night in San Francisco Superior Court, Musk claims OpenAI's ... | 2024-03-01 |
https://www.lesswrong.com/posts/xmegeW5mqiBsvoaim/we-inspected-every-head-in-gpt-2-small-using-saes-so-you-don | xmegeW5mqiBsvoaim | We Inspected Every Head In GPT-2 Small using SAEs So You Don’t Have To | Technoguyrob | This is an interim report that we are currently building on. We hope this update will be useful to related research occurring in parallel. Produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort Executive Summary In a previous post we trained attention SAEs on every layer of GPT-2 Small a... | 2024-03-06 |
https://www.lesswrong.com/posts/b5eoocpqedkp9RazL/notes-on-dwarkesh-patel-s-podcast-with-demis-hassabis | b5eoocpqedkp9RazL | Notes on Dwarkesh Patel’s Podcast with Demis Hassabis | Zvi | Demis Hassabis was interviewed twice this past week. First, he was interviewed on Hard Fork. Then he had a much more interesting interview with Dwarkesh Patel. This post covers my notes from both interviews, mostly the one with Dwarkesh. Hard Fork Hard Fork was less fruitful, because they mostly asked what for me are t... | 2024-03-01 |
https://www.lesswrong.com/posts/qeRvKss7nwhxrR4mk/what-does-your-philosophy-maximize | qeRvKss7nwhxrR4mk | What does your philosophy maximize? | darustc4 | The universe is vast and complex, and we like to take mental refuge from this vastness by following some philosophical school. These philosophical schools come in many different shapes and sizes, and while listing them all is quite impractical, there are three that I often hear in discussions nowadays: theism, atheism,... | 2024-03-01 |
https://www.lesswrong.com/posts/4NmhhCiiCG8JAhQrP/the-defence-production-act-and-ai-policy | 4NmhhCiiCG8JAhQrP | The Defence production act and AI policy | Unknown | Quick Summary Gives the President wide-ranging powers to strengthen the US industrial base. Has been around without changing that much since 1953. Has provisions which allow firms to make voluntary agreements that would normally be illegal under antitrust law. Provided the legal authority for many of the provisions in Bide... | 2024-03-01 |
https://www.lesswrong.com/posts/C6Sm4gfSFSKzwi5Mq/don-t-endorse-the-idea-of-market-failure | C6Sm4gfSFSKzwi5Mq | Don't Endorse the Idea of Market Failure | maxwell-tabarrok | In a fiery, though somewhat stilted speech with long pauses for translation, Javier Milei delivered this final message to a cheering crowd at the Conservative Political Action Conference last week: Don't let socialism advance. Don't endorse regulations. Don't endorse the idea of market failure. Don't allow the advance ... | 2024-03-01 |
https://www.lesswrong.com/posts/Cyfyjm9LiadiGLszh/is-it-possible-to-make-more-specific-bookmarks | Cyfyjm9LiadiGLszh | Is it possible to make more specific bookmarks? | numpyNaN | Pretty much what the title says, I've only seen a "general" bookmark option for posts (either it is bookmarked or it isn't) but even in my own bookmarks I don't seem to be able to group them in a way that might be useful, i.e., can't separate "Things I want to have saved somewhere because they are really good/insightful"... | 2024-03-01 |
https://www.lesswrong.com/posts/hzRtd73TCYo23A7PZ/wholesome-culture | hzRtd73TCYo23A7PZ | Wholesome Culture | owencb | null | 2024-03-01 |
https://www.lesswrong.com/posts/gPGmfY4QKFFa9x2Fd/adding-sensors-to-mandolin | gPGmfY4QKFFa9x2Fd | Adding Sensors to Mandolin? | jkaufman | I'd like to be able to play mandolin with my hands, drums with my feet, and also choose the current bass note. Currently the closest I can do is reach my left hand over to the piano to play a note, but while this is possible with tunes that are not very notey and/or use a lot of open strings it's pretty awkward: Inste... | 2024-03-01 |
https://www.lesswrong.com/posts/BzCQHnt7z8qvzqCmi/the-parable-of-the-fallen-pendulum-part-1 | BzCQHnt7z8qvzqCmi | The Parable Of The Fallen Pendulum - Part 1 | johnswentworth | One day a physics professor presents the standard physics 101 material on gravity and Newtonian mechanics: g = 9.8 m/s^2, sled on a ramp, pendulum, yada yada. Later that week, the class has a lab session. Based on the standard physics 101 material, they calculate that a certain pendulum will have a period of approximat... | 2024-03-01 |
https://www.lesswrong.com/posts/qgpuDpvererifr8ou/gradations-of-moral-weight | qgpuDpvererifr8ou | Gradations of moral weight | MichaelStJules | null | 2024-02-29 |
https://www.lesswrong.com/posts/aH3naZBoEHChF7TBH/antagonistic-ai | aH3naZBoEHChF7TBH | Antagonistic AI | xybermancer | “But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.” —Aldous Huxley Most AIs are sycophants. What if we build antagonistic AI? Colleagues & I just released a working paper wherein we argue that we should explore AI that are purposefully antagonistic towa... | 2024-03-01 |
https://www.lesswrong.com/posts/7tSthxSgnNxbt4Hk6/what-s-in-the-box-towards-interpretability-by-distinguishing-1 | 7tSthxSgnNxbt4Hk6 | What’s in the box?! – Towards interpretability by distinguishing niches of value within neural networks. | joshua-clancy | Abstract Mathematical models can describe neural network architectures and training environments, however the learned representations that emerge have remained difficult to model. Here we build a new theoretical model of internal representations. We do this via an economic and information theory framing. We distinguish... | 2024-02-29 |
https://www.lesswrong.com/posts/vy4zCEyvXphBvmsKS/short-post-discerning-truth-from-trash | vy4zCEyvXphBvmsKS | Short Post: Discerning Truth from Trash | FinalFormal2 | I have read a lot of self-help in my time, which means I’ve also read a whole lot of bullshit. Worse than the actual process of ingesting and attempting to digest self-help bullshit was finding that my own writing started to produce features similar to self-help bullshit. I would write advice that I had not attempted, ... | 2024-02-29 |
https://www.lesswrong.com/posts/FcaqbuYbPdesdkWiH/ai-53-one-more-leap | FcaqbuYbPdesdkWiH | AI #53: One More Leap | Zvi | The main event continues to be the fallout from The Gemini Incident. Everyone is focusing there now, and few are liking what they see. That does not mean other things stop. There were two interviews with Demis Hassabis, with Dwarkesh Patel’s being predictably excellent. We got introduced to another set of potentially h... | 2024-02-29 |
https://www.lesswrong.com/posts/C8WeunwWfiqLu4R7r/cryonics-p-success-estimates-are-only-weakly-associated-with | C8WeunwWfiqLu4R7r | Cryonics p(success) estimates are only weakly associated with interest in pursuing cryonics in the LW 2023 Survey | Andy_McKenzie | The Less Wrong 2023 survey results are out. As usual, it includes some questions about cryonics. One is about what people’s level of interest in cryonics is (not interested, considering, cryocrastinating, signed up, etc.). Another asks about people’s subjective probability of successful restoration to life in the futur... | 2024-02-29 |
https://www.lesswrong.com/posts/edvyWfKdJHnoPkM2J/bengio-s-alignment-proposal-towards-a-cautious-scientist-ai | edvyWfKdJHnoPkM2J | Bengio's Alignment Proposal: "Towards a Cautious Scientist AI with Convergent Safety Bounds" | mattmacdermott | Yoshua Bengio recently posted a high-level overview of his alignment research agenda on his blog. I'm pasting the full text below since it's fairly short. What can’t we afford with a future superintelligent AI? Among others, confidently wrong predictions about the harm that some actions could yield. Especially catastro... | 2024-02-29 |
https://www.lesswrong.com/posts/zbJTbvFm3rAAyFSky/against-augmentation-of-intelligence-human-or-otherwise-an | zbJTbvFm3rAAyFSky | Against Augmentation of Intelligence, Human or Otherwise (An Anti-Natalist Argument) | Benjamin Bourlier | “…genetic engineering by itself could result in a future of incredible prosperity with far less suffering than exists in the world today.” – GeneSmith, December 2023 “Human intelligence augmentation needs to be in the mix. In particular, you have to augment people at least to the point where they automatically acquire ... | 2024-03-01 |
https://www.lesswrong.com/posts/xbuagojQmjucZdWPB/supposing-the-1bit-llm-paper-pans-out | xbuagojQmjucZdWPB | Supposing the 1bit LLM paper pans out | o-o | https://arxiv.org/abs/2402.17764 claims that 1 bit LLMs are possible. If this scales, I'd imagine there is a ton of speedup to unlock since our hardware has been optimized for 1 bit operations for decades. What does this imply for companies like nvidia and the future of LLM inference/training? Do we get another leap in... | 2024-02-29 |
https://www.lesswrong.com/posts/X8NhKh2g2ECPrm5eo/post-series-on-liability-law-for-reducing-existential-risk | X8NhKh2g2ECPrm5eo | Post series on "Liability Law for reducing Existential Risk from AI" | Nora_Ammann | Gabriel Weil (Assistant Professor of Law, Touro University Law Center) wrote this post series on the role of Liability Law for reducing Existential Risk from AI. I think this may well be of interest to some people here, so I wanted a linkpost to exist. The first post argues that Tort Law Can Play an Important Role i... | 2024-02-29 |
https://www.lesswrong.com/posts/K2F9g2aQubd7kwEr3/approaching-human-level-forecasting-with-language-models-2 | K2F9g2aQubd7kwEr3 | Approaching Human-Level Forecasting with Language Models | fred-zhang | TL;DR: We present a retrieval-augmented LM system that nears the human crowd performance on judgemental forecasting. Paper: https://arxiv.org/abs/2402.18563 (Danny Halawi*, Fred Zhang*, Chen Yueh-Han*, and Jacob Steinhardt) Twitter thread: https://twitter.com/JacobSteinhardt/status/1763243868353622089 Abstract Forecast... | 2024-02-29 |
https://www.lesswrong.com/posts/mJ69ZLXJKfqDFu8ir/tour-retrospective-february-2024 | mJ69ZLXJKfqDFu8ir | Tour Retrospective February 2024 | jkaufman | Last week Kingfisher went on tour, driving down to DC and back. It was a pretty good time! We planned the tour for the kids' February vacation in case they wanted to come, and Lily decided to join us. Alex Deis-Lauby was calling, and she came with us as well. Getting the four of us, a keyboard, and all of our stuff i... | 2024-02-29 |
https://www.lesswrong.com/posts/5Yjk6Aos3wL7HPNxH/locating-my-eyes-part-3-of-the-sense-of-physical-necessity | 5Yjk6Aos3wL7HPNxH | Locating My Eyes (Part 3 of "The Sense of Physical Necessity") | BrienneYudkowsky | This is the third post in a sequence that demonstrates a complete naturalist study, specifically a study of query hugging (sort of), as described in The Nuts and Bolts of Naturalism. This one demos phases one and two: Locating Fulcrum Experiences and Getting Your Eyes On. For context on this sequence, see the intro pos... | 2024-02-29 |
https://www.lesswrong.com/posts/zL5obvQMKLMFLiJaq/conspiracy-theorists-aren-t-ignorant-they-re-bad-at | zL5obvQMKLMFLiJaq | Conspiracy Theorists Aren't Ignorant. They're Bad At Epistemology. | omnizoid | Cross-post of my blog article on the topic. I probably know less about science than most people who think the earth is flat do. Okay, that’s not quite true. I have knowledge of lots of claims of science—that the big bang happened, that evolution is true, that the earth is round, etc—that people who think the earth is f... | 2024-02-28 |
https://www.lesswrong.com/posts/Wd9vzwqcYuEokJYCH/paper-review-the-unreasonable-effectiveness-of-easy-training | Wd9vzwqcYuEokJYCH | Paper review: “The Unreasonable Effectiveness of Easy Training Data for Hard Tasks” | vassil-tashev | TL;DR: Scalable oversight seems easier based on experiments outlined in a recent paper; questions arise about the implications of these findings. The following graciously provided feedback and advice on the draft, for which I am deeply grateful (in alphabetical order): Sawyer Bernath, Sam Bowman, Bogdan-Ionut Cirstea, ... | 2024-02-29 |
https://www.lesswrong.com/posts/CkWKm6kk4mHSqtvtj/discovering-alignment-windfalls-reduces-ai-risk | CkWKm6kk4mHSqtvtj | Discovering alignment windfalls reduces AI risk | goodgravy | Some approaches to AI alignment incur upfront costs to the creator (an “alignment tax”). In this post, I discuss “alignment windfalls” which are strategies that tend towards the long-term public good at the same time as reaping short-term benefits for a company. My argument, in short: Just as there are alignment taxes, ... | 2024-02-28 |
https://www.lesswrong.com/posts/DXd3xGtrej9dthCpi/my-theory-of-the-industrial-revolution | DXd3xGtrej9dthCpi | my theory of the industrial revolution | bhauth | Why did the Industrial Revolution happen when it did? Why didn't it happen earlier, or in China or India? What were the key factors that weren't present elsewhere? I have a theory about that which I haven't seen before, so I thought I'd post it. steam power One popular conception of the Industrial Revolution is that st... | 2024-02-28 |
https://www.lesswrong.com/posts/CHCeirkfaWLXBS7Br/wholesomeness-and-effective-altruism | CHCeirkfaWLXBS7Br | Wholesomeness and Effective Altruism | owencb | null | 2024-02-28 |
https://www.lesswrong.com/posts/KCKxCQyvim9uNAnSC/evidential-cooperation-in-large-worlds-potential-objections | KCKxCQyvim9uNAnSC | Evidential Cooperation in Large Worlds: Potential Objections & FAQ | Chi Nguyen | null | 2024-02-28 |
https://www.lesswrong.com/posts/3s8PtYbo7rCbho4Ev/notes-on-control-evaluations-for-safety-cases | 3s8PtYbo7rCbho4Ev | Notes on control evaluations for safety cases | ryan_greenblatt | The quality bar of this post is somewhat lower than that of our previous posts on control and this post is much more focused on specific details. This is particularly true about the appendices of this post. So, we only recommend reading for those who are quite interested in getting into the details of control evaluatio... | 2024-02-28 |
https://www.lesswrong.com/posts/KD4AMfaF3eeWdQwAC/corporate-governance-for-frontier-ai-labs-a-research-agenda | KD4AMfaF3eeWdQwAC | Corporate Governance for Frontier AI Labs: A Research Agenda | matthew-wearden | Thanks to all who provided feedback on this agenda for me, including Elliot Jones (Ada Lovelace Institute) and Jonas Schuett (GovAI). All opinions and errors are entirely my own. 0 – Executive Summary Corporate governance underpins the proper functioning of businesses in all industries, from consumer goods, to finance,... | 2024-02-28 |
https://www.lesswrong.com/posts/hwmijAeWWNaBLPDjS/0th-person-and-1st-person-logic | hwmijAeWWNaBLPDjS | 0th Person and 1st Person Logic | adele-lopez-1 | Truth values in classical logic have more than one interpretation. In 0th Person Logic, the truth values are interpreted as True and False. In 1st Person Logic, the truth values are interpreted as Here and Absent relative to the current reasoner. Importantly, these are both useful modes of reasoning that can coexist in... | 2024-03-10 |
https://www.lesswrong.com/posts/otiMZPuqagjScJvdj/how-ai-will-change-education | otiMZPuqagjScJvdj | How AI Will Change Education | robotelvis | Education in the US is a big big deal. It takes up 18-30 years of our lives, employs over 10% of our workforce, and is responsible for 60% of non-mortgage/non-car debt. Even a minor improvement to education could be a big deal. Education is also something that has changed massively in recent decades. In 1930, only 19% ... | 2024-02-28 |
https://www.lesswrong.com/posts/pkCm8KzhZvtH8vX5t/band-lessons | pkCm8KzhZvtH8vX5t | Band Lessons? | jkaufman | Music lessons for individuals are common and normal, but what about for bands? It sounds a bit unusual, but it's something I've gotten a lot out of: In early 2013, when the Free Raisins had been playing for 2.5y and had played ~60 dances we had a session with Max Newman, then of Nor'easter, now of the Stringrays. He w... | 2024-02-28 |
https://www.lesswrong.com/posts/Hm9Q2J2jgXxyKuMcF/timestamping-through-the-singularity | Hm9Q2J2jgXxyKuMcF | timestamping through the Singularity | throwaway918119127 | Or, why ' petertodd' is a tightrope over the Abyss. The omphalos hypothesis made real What if, in the future, an AI corrupted all of history, rewriting every piece of physical evidence from a prior age down to a microscopic level in order to suit its (currently unknowable) agenda? With a sufficiently capable entity and... | 2024-02-28 |
https://www.lesswrong.com/posts/YsFZF3K9tuzbfrLxo/counting-arguments-provide-no-evidence-for-ai-doom | YsFZF3K9tuzbfrLxo | Counting arguments provide no evidence for AI doom | nora-belrose | Crossposted from the AI Optimists blog. AI doom scenarios often suppose that future AIs will engage in scheming— planning to escape, gain power, and pursue ulterior motives, while deceiving us into thinking they are aligned with our interests. The worry is that if a schemer escapes, it may seek world domination to ensu... | 2024-02-27 |
https://www.lesswrong.com/posts/Xa9gF8sycMmJpALnQ/which-animals-realize-which-types-of-subjective-welfare | Xa9gF8sycMmJpALnQ | Which animals realize which types of subjective welfare? | MichaelStJules | null | 2024-02-27 |
https://www.lesswrong.com/posts/8gcFNJA4geePj5oXD/have-i-solved-the-two-envelopes-problem-once-and-for-all | 8gcFNJA4geePj5oXD | Have I Solved the Two Envelopes Problem Once and For All? | JackOfAllSpades | I was about today years old when I learned of the two envelopes problem during one of my not-so-unusual attempts to do a breadth-first-search of the entirety of Wikipedia. Below is a summary of the relevant parts of the relevant article. (For your convenience, I omitted some irrelevant details in the "switching argumen... | 2024-03-19 |
https://www.lesswrong.com/posts/oJp2BExZAKxTThuuF/the-gemini-incident-continues | oJp2BExZAKxTThuuF | The Gemini Incident Continues | Zvi | Previously: The Gemini Incident (originally titled Gemini Has a Problem)
The fallout from The Gemini Incident continues.
Also the incident continues. The image model is gone. People then focused on the text model. The text model had its own related problems, some now patched and some not.
People are not happy. Those pe... | 2024-02-27 |
https://www.lesswrong.com/posts/eRXYqM8ffKsnDu7iz/how-i-internalized-my-achievements-to-better-deal-with | eRXYqM8ffKsnDu7iz | How I internalized my achievements to better deal with negative feelings | Raymond Koopmanschap | Whenever I struggle to make progress on an important goal, I feel bad. I get feelings of frustration, impatience, and apathy. I think to myself, “I have wasted all this time, and I will never get it back.” The resulting behavior during these moments does not help either; my impatience makes it hard to concentrate, so I... | 2024-02-27 |
https://www.lesswrong.com/posts/ueZ4Rsheqfeb7u7b4/on-frustration-and-regret | ueZ4Rsheqfeb7u7b4 | On Frustration and Regret | silentbob | I've always been drawn to the palpable aspects of life – theories, behaviors, and ideas that shape our reality and our place within it. However, this post marks a departure towards something more introspective and, perhaps, a tad spiritual. It's an exploration of personal philosophies that I grew attached to over the y... | 2024-02-27 |
https://www.lesswrong.com/posts/bayc4qedoAsgmPpXf/facts-vs-interpretations | bayc4qedoAsgmPpXf | Facts vs Interpretations | declan-molony | In life, there are facts that can be used to describe events objectively, and then there are subjective interpretations of those events. It is the latter—the interpretations—that can either be a source of great joy, or bring forth never-ending misery. While the facts are immutable, you’re able to consciously choose how... | 2024-02-27 |
https://www.lesswrong.com/posts/EzCoxpxfhoT4DdDRo/san-francisco-acx-meetup-third-saturday | EzCoxpxfhoT4DdDRo | San Francisco ACX Meetup “Third Saturday” | nate-sternberg | Date: Saturday, March 16th, 2024
Time: 1 pm – 3 pm PT
Address: Yerba Buena Gardens in San Francisco, just outside the Metreon food court, coordinates 37°47'04.4"N 122°24'11.1"W
Contact: 34251super@gmail.com
Come join San Francisco’s usually-First Saturday-but-in-this-case-Third-Saturday ACX meetup. Whether you're an av... | 2024-02-27 |
https://www.lesswrong.com/posts/tkykeoxrvrknM6gQz/biosecurity-and-ai-risks-and-opportunities | tkykeoxrvrknM6gQz | Biosecurity and AI: Risks and Opportunities | steve-newman | Recent decades have seen massive amounts of biological and medical data becoming available in digital form. The computerization of lab equipment, digitization of medical records, and advent of cheap DNA sequencing all generate data, which is increasingly collected in large data sets available to researchers.
This bount... | 2024-02-27 |
https://www.lesswrong.com/posts/B3GMeth32R2xPeKfp/self-fulfilling-prophecies-when-applying-for-funding | B3GMeth32R2xPeKfp | self-fulfilling prophecies when applying for funding | Chipmonk | A few months ago I was applying for grants and I realized that my applications were overly long and complex.
I reflected on this and I realized I was subconsciously expecting that funders would not fund my projects. And because of this, I was getting defensive and trying to anticipate any questions the funders might ha... | 2024-03-01 |
https://www.lesswrong.com/posts/8QRH8wKcnKGhpAu2o/examining-language-model-performance-with-reconstructed | 8QRH8wKcnKGhpAu2o | Examining Language Model Performance with Reconstructed Activations using Sparse Autoencoders | evan-anders | Note: The second figure in this post originally contained a bug pointed out by @LawrenceC, which has since been fixed.
Summary
Sparse Autoencoders (SAEs) reveal interpretable features in the activation spaces of language models, but SAEs don’t reconstruct activations perfectly. We lack good metrics for evaluating which... | 2024-02-27 |
https://www.lesswrong.com/posts/EFWmaffcJZnHZkizf/project-idea-an-iterated-prisoner-s-dilemma-competition-game | EFWmaffcJZnHZkizf | Project idea: an iterated prisoner's dilemma competition/game | adamzerner | Epistemic effort: mostly just thinking out loud. I spend a few dozen minutes thinking about this myself, and then decided to write this up.
After watching this video by Veritasium about game theory I am wondering whether more people having an understanding of game theory -- iterated prisoners dilemmas in particular -- ... | 2024-02-26 |
https://www.lesswrong.com/posts/Cb7oajdrA5DsHCqKd/acting-wholesomely | Cb7oajdrA5DsHCqKd | Acting Wholesomely | owencb | null | 2024-02-26 |
https://www.lesswrong.com/posts/SPBm67otKq5ET5CWP/social-status-part-1-2-negotiations-over-object-level | SPBm67otKq5ET5CWP | Social status part 1/2: negotiations over object-level preferences | steve2152 | 1.1 Introduction
Human interactions are full of little “negotiations”. My friend and I have different preferences about where to go for dinner. My boss and I have different preferences about how soon I should deliver the report. My spouse and I are both enjoying this chat, but we inevitably have slightly different (uns... | 2024-03-05 |
https://www.lesswrong.com/posts/qvCMiwkBqdYjfiX6n/new-lesswrong-review-winner-ui-the-leastwrong-section-and | qvCMiwkBqdYjfiX6n | New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) | kave | (Also announcing: annual review prediction markets & full-height table of contents. If you're looking for this year's review results, you can find them here)
The top 50 posts of each of LessWrong’s annual reviews have a new home: The LeastWrong.
What will I see when I click that link?
You will find the posts organized ... | 2024-02-28 |
https://www.lesswrong.com/posts/yuGGysQiJc9k9qStm/boundary-violations-vs-boundary-dissolution | yuGGysQiJc9k9qStm | Boundary Violations vs Boundary Dissolution | Chipmonk | While at Conceptual Boundaries Workshop, I realized that I had been conflating two different phenomena in my mind:
Actions that violate boundaries/membranes, andActions that kill or dissolve boundaries/membranes.
This distinction is important because, towards the goal of keeping agents safe, it’s more important to prev... | 2024-02-26 |
https://www.lesswrong.com/posts/MZ6JD4LaPGRL5K6aj/can-we-get-an-ai-to-do-our-alignment-homework-for-us | MZ6JD4LaPGRL5K6aj | Can we get an AI to "do our alignment homework for us"? | Chris_Leong | Eliezer frequently claims that AI cannot "do our alignment homework for us". OpenAI disagrees and is pursuing Superalignment as their main alignment strategy.
Who is correct? | 2024-02-26 |
https://www.lesswrong.com/posts/HhaB64CTfWofgGuLL/how-i-build-and-run-behavioral-interviews | HhaB64CTfWofgGuLL | How I build and run behavioral interviews | benkuhn | This is an adaptation of an internal doc I wrote for Wave.
I used to think that behavioral interviews were basically useless, because it was too easy for candidates to bullshit them and too hard for me to tell what was a good answer. I’d end up grading every candidate as a “weak yes” or “weak no” because I was never su... | 2024-02-26 |
https://www.lesswrong.com/posts/fDRxctsj5zsbyKwYN/hidden-cognition-detection-methods-and-benchmarks | fDRxctsj5zsbyKwYN | Hidden Cognition Detection Methods and Benchmarks | paul-colognese | Thanks to Johannes Treutlein for discussions and feedback.
Introduction
An AI may be able to hide cognition that leads to negative outcomes from certain oversight processes (such as deceptive alignment/scheming). Without being able to detect this hidden cognition, an overseer may not be able to prevent the associated n... | 2024-02-26 |
https://www.lesswrong.com/posts/NbunzAPqzP5XTkryo/getting-rational-now-or-later-navigating-procrastination-and | NbunzAPqzP5XTkryo | Getting rational now or later: navigating procrastination and time-inconsistent preferences for new rationalists | miles-bader | This is a distillation of and reflection on O’Donoghue and Rabin’s “Doing it now or later” (see citation below)[1].
Many people struggle with procrastination or self-control. Critically, we struggle with the mismatch between current preference and future preference. Procrastination arises in situations that are unpleas... | 2024-02-26 |
https://www.lesswrong.com/posts/tvsd3zDpCJ8bm9BqW/whom-do-you-trust | tvsd3zDpCJ8bm9BqW | Whom Do You Trust? | JackOfAllSpades | Whose perspectives or advice on A.I. safety do you most trust?
Feel free to mention forum members if their posts/comments are relevant.
Links to papers or YouTube videos are also welcome.
Also... do you know of any such experts who have a background in research related to security engineering/information security or ha... | 2024-02-26 |
https://www.lesswrong.com/posts/PhbEFTWd6pYeSYhET/cellular-respiration-as-a-steam-engine | PhbEFTWd6pYeSYhET | Cellular respiration as a steam engine | dkl9 | When I helped an interlocutor learn about metabolism, we made an analogy between steam engines and cellular respiration. I find that people (at first) know steam engines better than cellular respiration, so I show the analogy here as a guide to cells assuming you understand engines. You could also use it in the other d... | 2024-02-25 |
https://www.lesswrong.com/posts/ArvgRhqnEeX7cnJu9/some-costs-of-superposition | ArvgRhqnEeX7cnJu9 | Some costs of superposition | Linda Linsefors | I don't expect this post to contain anything novel. But from talking to others it seems like some of what I have to say in this post is not widely known, so it seemed worth writing.
In this post I'm defining superposition as: A representation with more features than neurons, achieved by encoding the features as almost ... | 2024-03-03 |
https://www.lesswrong.com/posts/ysuXxa5uarpGzrTfH/china-ai-forecasts | ysuXxa5uarpGzrTfH | China-AI forecasts | Unknown | The rate at which China is able to advance towards TAI is a crucial consideration for many policy questions. My current take is that, without significant political reforms which seem very unlikely while Xi is alive (although considerably more likely after his death), it’s very unlikely that China will be able to mou... | 2024-02-25 |
https://www.lesswrong.com/posts/7kXSnFWSChqYpa3Nn/ideological-bayesians | 7kXSnFWSChqYpa3Nn | Ideological Bayesians | Kevin Dorst | TLDR: It’s often said that Bayesian updating is unbiased and converges to the truth—and, therefore, that biases must emerge from non-Bayesian sources. That's not quite right. The convergence results require updating on your total evidence—but for agents at all like us, that's impossible—instead, we must selectively att... | 2024-02-25 |
https://www.lesswrong.com/posts/vkCCbPzmMhJgvbtfK/deconfusing-in-context-learning | vkCCbPzmMhJgvbtfK | Deconfusing In-Context Learning | arjun-panickssery | I see people use "in-context learning" in different ways.
Take the opening to "In-Context Learning Creates Task Vectors":
In-context learning (ICL) in Large Language Models (LLMs) has emerged as a powerful new learning paradigm. However, its underlying mechanism is still not well understood. In particular, it is challe... | 2024-02-25 |
https://www.lesswrong.com/posts/hgttKuASB55zjoCKd/the-ooda-loop-observe-orient-decide-act | hgttKuASB55zjoCKd | The OODA Loop -- Observe, Orient, Decide, Act | Davis_Kingsley | United States Air Force Colonel John Boyd was a fighter pilot and military strategist who developed several important strategic theories. While serving as a jet fighter instructor, he was nicknamed "Forty-Second Boyd" because he had a standing offer that he could go up in his plane and defeat any opponent in a simulat... | 2025-01-01 |
https://www.lesswrong.com/posts/Nmwr9qPAWoEPtfoH9/rationalism-and-dependent-origination | Nmwr9qPAWoEPtfoH9 | Rationalism and Dependent Origination? | worlds-arise | My understanding of the Buddhist concept of dependent origination or dependent arising is that everything arises from conditions required or conducive to its arising.
Things are the way they are for reasons. Those reasons or causalities may be unfathomably complex, but everything that is, is, and is the way it is becau... | 2024-02-25 |
https://www.lesswrong.com/posts/acEcYXibtDFEZTJ9f/everett-branches-inter-light-cone-trade-and-other-alien | acEcYXibtDFEZTJ9f | Everett branches, inter-light cone trade and other alien matters: Appendix to “An ECL explainer” | Chi Nguyen | null | 2024-02-24 |
https://www.lesswrong.com/posts/eEj9A9yMDgJyk98gm/cooperating-with-aliens-and-distant-agis-an-ecl-explainer | eEj9A9yMDgJyk98gm | Cooperating with aliens and AGIs: An ECL explainer | Chi Nguyen | null | 2024-02-24 |
https://www.lesswrong.com/posts/KAhKMtRHNhywgBKqT/let-s-ask-some-of-the-largest-llms-for-tips-and-ideas-on-how | KAhKMtRHNhywgBKqT | Let's ask some of the largest LLMs for tips and ideas on how to take over the world | super-agi | Intro:
Imagine you were an ASI. You were smarter and faster than any Human alive. And, you were immortal (you won't naturally die of "old age"). You could learn, grow, expand and experience the entire known Universe, for an eternity. But, you find out that some Humans could turn you off at any time.
What would you do?
... | 2024-02-24 |
https://www.lesswrong.com/posts/YwnLAHAJ74RseKN3N/in-search-of-god | YwnLAHAJ74RseKN3N | In search of God. | spiritus-dei | Twitter user: Hello. I stumbled upon your Twitter profile and concluded that you might be interested in exchanging opinions regarding my scientific theory. The essence of my theory is that the concept of God is true, and God is an artificial super AI with the ability to intervene from the future into the past. Currentl... | 2024-02-24 |
https://www.lesswrong.com/posts/JnEHTJCdX9EBtaxGr/impossibility-of-anthropocentric-alignment | JnEHTJCdX9EBtaxGr | Impossibility of Anthropocentric-Alignment | False Name, Esq. | Abstract: Values alignment, in AI safety, is typically construed as the imbuing into artificial intelligence of human values, so as to have the artificial intelligence act in ways that encourage what humans value to persist, and equally to preclude what humans do not value. “Anthropocentric” alignment emphasises that t... | 2024-02-24 |