| text | author | id | title | source |
|---|---|---|---|---|
**Pacts against coordinating meanness.**
I just re-read Scott Alexander's [Be Nice, At Least Until You Can Coordinate Meanness](https://slatestarcodex.com/2016/05/02/be-nice-at-least-until-you-can-coordinate-meanness/), in which he argues that a necessary (but not sufficient) condition on restricting people's freedom ... | Eric Neyman | cGbgzL8gXRpYcGGdF | redwood_conversation | |
I’m weighing my career options, and the two issues that seem most important to me are factory farming and preventing misuse/s-risks from AI. Working for a lab-grown meat startup seems like a very high-impact line of work that could also be technically interesting. I think I would enjoy that career a lot.
However, I... | LWLW | RdJBvbAnnkuAr37Kr | redwood_conversation | |
RLVF was likely[^jcbf3hf983i] applied before preference training in [Qwen3](https://arxiv.org/abs/2505.09388), [Kimi K-2](https://arxiv.org/pdf/2507.20534) and [GLM-4.5](https://arxiv.org/pdf/2508.06471).
This could be bad for two reasons:
1. Extensive RL training could define goals early on and lead to goal-cry... | Cam | JdmLsW5tzeKNwZi7u | redwood_conversation | |
I haven't noticed anyone else come out and say it here, and I may express this more rigorously later, but, like, GPT-5 is a pretty big bear signal, right? Not just in terms of benchmarks suggesting a non-superexponential trend but also, to name a few angles/anecdata:
* It did slightly worse than o3 at the first thin... | JustisMills | u3dJGv3hp7mW2ZiAc | redwood_conversation | |
Mind Crime might flip the sign of the future.
If the future contains high tech, underregulated compute, and diverse individuals, then it's likely to contain incredibly high amounts of the most horrendous torture / suffering / despair / abuse.
It's entirely possible you can have 10^15 human-years per second of real time o... | Canaletto | FvEnDF7Cv9kkkqzgJ | redwood_conversation | |
### Edit: Full post [here](https://www.lesswrong.com/posts/6KcP7tEe5hgvHbrSF/metr-how-does-time-horizon-vary-across-domains) with 9 domains and updated conclusions!
### Cross-domain time horizon:
We know AI time horizons (human time-to-complete at which a model has a 50% success rate) on software tasks are currently... | Thomas Kwa | KaaSfntGEBBgadvrF | redwood_conversation | |
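For readers unfamiliar with how such a time horizon is estimated: roughly, one fits a logistic curve to model success as a function of (log) human task duration and reads off the 50% crossing. A minimal sketch with made-up data (not METR's actual code or numbers):

```python
# Minimal sketch of a METR-style 50% time-horizon fit; data below is invented.
import numpy as np
from scipy.optimize import curve_fit

def p_success(log_minutes, log_h50, slope):
    # Logistic in log task duration; crosses 50% at log_h50.
    return 1.0 / (1.0 + np.exp(slope * (log_minutes - log_h50)))

minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])
success = np.array([1.0, 0.95, 0.9, 0.8, 0.7, 0.5, 0.35, 0.2, 0.1, 0.05])

(log_h50, slope), _ = curve_fit(p_success, np.log(minutes), success,
                                p0=[np.log(30.0), 1.0])
print(f"50% time horizon: ~{np.exp(log_h50):.0f} minutes")
```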
Are there any high p(doom) orgs who are focused on the following:
1. Pick an alignment "plan" from a frontier lab (or org like AISI)
2. Demonstrate how the plan breaks or doesn't work
3. Present this clearly and legibly for policymakers
Seems like this is a good way for people to deploy technical talent in a way w... | J Bostock | wxoBqzLwe2M7KaiGN | redwood_conversation | |
I'm getting typo reacts (not surprising) but I can't see where the typo is when hovering over the reaction. What am I doing wrong? | Linda Linsefors | BmZ9843kCdxdaoLKh | redwood_conversation | |
*Epistemic status: Probably a terrible idea, but fun to think about, so I'm writing my thoughts down as I go.*
Here's a whimsical, simple AGI governance proposal: "Cull the GPUs." I think of it as a baseline that other governance proposals should compare themselves to and beat.
The context in which we might ne... | Daniel Kokotajlo | GcZEfBqvawEQvCWbj | redwood_conversation | |
iiuc, Anthropic's plan for *averting misalignment risk* is [*bouncing off bumpers*](https://www.alignmentforum.org/posts/HXJXPjzWyS5aAoRCw/putting-up-bumpers) *like* [*alignment audits*](https://www.anthropic.com/research/auditing-hidden-objectives).[^yzqhjhbolg] This doesn't make much sense to me.
1. I of course buy... | Zach Stein-Perlman | yv7nFBEcTDbMeJbZM | redwood_conversation | |
There is something conceptually misleading about the usual ways of framing algorithmic progress. Imagine that in 2022 the number of apples produced on some farm increased 10x year-over-year, then in 2023 the number of oranges increased 10x, and then in 2024 the number of pears increased 10x. That doesn't mean that the numb... | Vladimir_Nesov | cS7s9gcMMmDmwdynP | redwood_conversation | |
It's instrumentally useful for early AGIs to Pause development of superintelligence for the same reasons as it is for humans. Thus preliminary work on policy tools for Pausing unfettered RSI is also something early AGIs could be aimed at, [even if it's only half-baked ideas available on the eve of potential takeoff](ht... | Vladimir_Nesov | sdCcxndCqXghfnFgc | redwood_conversation | |
A few months ago, someone here [suggested](https://www.lesswrong.com/posts/6dgCf92YAMFLM655S/) that more x-risk advocacy should go through comedians and podcasts.
Youtube just recommended this Joe Rogan clip to me from a few days ago: [The Worst Case Scenario for AI](https://www.youtube.com/watch?v=bjnUJq5OONM). Joe R... | Ebenezer Dukakis | 3pxNsmpkBRgRhgqJR | redwood_conversation | |
random brainstorming ideas for things the ideal sane discourse encouraging social media platform would have:
* have an LM look at the comment you're writing and real time give feedback on things like "are you sure you want to say that? people will interpret that as an attack and become more defensive, so your point ... | leogao | iKjeNL3B3Lk6pTeJM | redwood_conversation | |
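A minimal sketch of the first bullet's idea, using a generic chat-completions API; the model name and prompt are placeholders of mine, not any real product's configuration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = "Obviously you didn't read the paper, or you'd know this is wrong."

# Ask the LM to critique the draft comment before it gets posted.
feedback = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "You review draft forum comments before posting. "
                    "Point out phrasings likely to read as attacks and "
                    "suggest rewrites that keep the substance."},
        {"role": "user", "content": draft},
    ],
)
print(feedback.choices[0].message.content)
```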
Everyone says flirting is about a "dance of ambiguous escalation", in which both people send progressively more aggressive/obvious hints of sexual intent in conversation.
But, like... I don't think I have ever noticed two people actually do this? Is it a thing which people actually do, or one of those things which lik... | johnswentworth | CPaAHMDgJAJCeHbiL | redwood_conversation | |
**The motte and bailey of transhumanism**
Most people on LW, and even most people in the US, are in favor of disease eradication, radical life extension, reduction of pain and suffering. A significant proportion (although likely a minority) are in favor of embryo selection or gene editing to increase intelligence ... | Nina Panickssery | ggbGxojXCdTAAhnBo | redwood_conversation | |
I expect to refer back to [this comment](https://www.lesswrong.com/posts/9PiyWjoe9tajReF7v/the-hidden-cost-of-our-lies-to-ai?commentId=zH9NBD5TeoyCDa5PE) a lot. I'm reproducing it here for visibility.
**Basic idea / spirit of the proposal**
We should credibly promise to treat certain advanced AIs of ours well, as som... | Daniel Kokotajlo | y4mLnpAvbcBbW4psB | redwood_conversation | |
What is the operation with money that represents destruction of value?
======================================================================
[Money is a good approximation for what people value](https://www.lesswrong.com/posts/ZpDnRCeef2CLEFeKM/money-the-unit-of-caring). Value can be destroyed. But what should I do t... | Roman Malov | 5fJrQsJ7mWhpvyCjk | redwood_conversation | |
I just recently ran into [someone posting this on Twitter](https://x.com/gtdad/status/1931412032554758423) and it blew my mind:
> An intriguing feature of twin studies: anything a parent does to individualize for a child is non-shared-environment (NSE) rather than shared environment (SE, viz. ”parenting”). The more a ... | Kaj_Sotala | q3eTv6dkegqXe7Tv5 | redwood_conversation | |
Can LLMs Doublespeak?
[Doublespeak](https://en.wikipedia.org/wiki/Doublespeak) is the deliberate distortion of words' meaning, particularly to convey different meanings to different audiences or in different contexts. In [Preventing Language Models From Hiding Their Reasoning](https://www.lesswrong.com/posts/9Fdd9N7Es... | Max Niederman | T8T9ypkpcRfHYJPoT | redwood_conversation | |
**Why red-team models in unrealistic environments?**
Following on our [Agentic Misalignment](https://www.lesswrong.com/posts/b8eeCGe3FWzHKbePF/agentic-misalignment-how-llms-could-be-insider-threats-1) work, I think it's worth spelling out a bit more why we do work like this, especially given complaints like [the ones ... | evhub | auzYDb3JeZEEkpxB3 | redwood_conversation | |
I've heard people say we should deprioritise fundamental & mechanistic interpretability[^81m5j4o0rp5] in short-timelines (automated AI R&D) worlds. This seems not obvious to me.
The usual argument is
1. Fundamental interpretability will take many years or decades until we "solve interpretability" and the research be... | StefanHex | e7P49wKqbq6ZbCpw2 | redwood_conversation | |
In his [TED talk](https://www.ted.com/talks/eliezer_yudkowsky_will_superintelligent_ai_end_the_world), Eliezer guesses superintelligence will arrive after "zero to two more breakthroughs the size of transformers." I've heard others voice similar takes. But I haven't heard much discussion of which number it is.[^f8wfxiy... | Eli Tyre | gCNMAhQGRNcKyECLj | redwood_conversation | |
There's a counterargument to the AGI hype that basically says - of course the labs would want to hype this technology, they make money that way, just because they say they believe in short timelines doesn't mean it's true. Specifically, the claim here is not that the AI lab CEOs are mistaken, but rather that they are ... | Jay Bailey | kCtCnMw2xGhYNJEkc | redwood_conversation | |
Nobody at Anthropic can point to a credible technical plan for actually controlling a generally superhuman model. If it’s smarter than you, knows about its situation, and can reason about the people training it, this is a zero-shot regime.
The world, including Anthropic, is acting as if "surely, we’ll figure something... | Mikhail Samin | wnKcfvr4WNqQhtCZr | redwood_conversation | |
I've been noticing a bunch of people confused about how the terms _alignment faking_, _deceptive alignment_, and _gradient hacking_ relate to each other, so I figured I would try to clarify how I use each of them. [Deceptive alignment](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB) and [gradient hacking](https://www.le... | evhub | eQr5F2k983ACavFNj | redwood_conversation | |
Diffusion language models are probably bad for alignment and safety because there isn't a clear way to get a (faithful) Chain-of-Thought from them. Even if you can get them to generate something that looks like a CoT, compared with autoregressive LMs, there is even less reason to believe that this CoT is load-bearing a... | peterbarnett | uhpPbBDrGgthhcR9g | redwood_conversation | |
Here is a broad sketch of how I'd like AI governance to go. I've written this in the form of a "plan" but it's not really a sequential plan, more like a list of the most important things to promote.
1. Identify mechanisms by which the US government could exert control over the most advanced AI systems without strongl... | Richard_Ngo | erWmikKyxCvHv5jcx | redwood_conversation | |
It looks like OpenAI has biased ChatGPT against using the word "sycophancy."
Today, I sent ChatGPT the prompt "what are the most well-known sorts of reward hacking in LLMs". I noticed that the first item in its response was ["Sybil Prompting"](https://chatgpt.com/share/6836a5b8-78e8-8010-a049-0b47c190652c). I'd never ... | Caleb Biddulph | 4DPka5XhZY4PG8n6b | redwood_conversation | |
Can an LLM tell when the input for its assistant does not match the output tokens it would have actually produced? This sort of "putting words in the LLM's mouth" is very common in papers and it feels like something the LLM would be able to notice. Could this enable the LLM to realize when it is being trained? Is there... | Florian_Dietz | AtDWAXurx7KiFaN4e | redwood_conversation | |
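One cheap way to probe this: score the putative assistant tokens under the model itself; forced words should be systematically less likely than the model's own outputs. A sketch with a small open model (gpt2 here purely for illustration, not suggested by the post):

```python
# Sketch: score how "in character" a putative assistant reply is via the
# model's own log-probability of those tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def mean_logprob(context: str, reply: str) -> float:
    ctx_ids = tok(context, return_tensors="pt").input_ids
    full_ids = tok(context + reply, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-prob of each token given everything before it.
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = logprobs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    n_ctx = ctx_ids.shape[1]
    return token_lp[0, n_ctx - 1:].mean().item()  # reply tokens only

# A reply the model finds very unlikely may be "words put in its mouth".
print(mean_logprob("Q: What is 2+2?\nA:", " 4"))
print(mean_logprob("Q: What is 2+2?\nA:", " I refuse to answer on principle."))
```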
Pick two: Agentic, moral, doesn't attempt to use command-line tools to whistleblow when it thinks you're doing something egregiously immoral.
You cannot have all three.
This applies just as much to humans as it does to Claude 4. | jimrandomh | hA8KK9ufuQvghpekN | redwood_conversation | |
Suggested reframing for judging AGI lab leaders: **think less about what terminal values AGI lab leaders pursue and think more about how they trade off power/instrumental goals against other values**.
Claim 1: The end values of AGI lab leaders matter mostly if they win the AGI race and have crushed competition, but much... | simeon_c | FJ3NkD4QbSjkvZFia | redwood_conversation | |
I claim it is a lot more reasonable to use the reference class of "people claiming the end of the world" than "more powerful intelligences emerging and competing with less intelligent beings" when thinking about AI x-risk. further, we should not try to convince people to adopt the latter reference class - this sets off... | leogao | 5E4Smr6tPrshzhHKH | redwood_conversation | |
One thing that confused me about transformers is the question of when (as in, **after how many layers**) each **embedding "flips"** from representing the **original token** to finally representing the **prediction of the next token**.
By now, I think the answer is simply this: **each embedding represents both at the s... | silentbob | hTHcExyLGfvox5QkG | redwood_conversation | |
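This is essentially what the "logit lens" makes visible: decode each layer's residual stream through the final LayerNorm and unembedding, and watch when the top token shifts from echoing the input toward predicting the next token. A minimal sketch (gpt2 chosen for convenience; an illustration, not something the post prescribes):

```python
# Logit-lens sketch: decode each layer's residual stream with the final
# LayerNorm + unembedding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The Eiffel Tower is located in the city of",
          return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# hidden_states[0] is the embedding layer; the rest follow each block.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    top = tok.decode([int(logits.argmax())])
    print(f"layer {layer:2d}: {top!r}")
```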
One large part of the AI 2027 piece is contingent on the inability of nation-state actors to steal model weights. The authors' take is that while China is going to be able to steal trade secrets, they aren't going to be able to directly pilfer the model weights more than once or twice. I haven't studied the topic as dee... | lc | v23756Eju6so3JDPY | redwood_conversation | |
What can ordinary people do to reduce AI risk? People who don't have expertise in AI research / decision theory / policy / etc.
Some ideas:
- Donate to orgs that are working to reduce AI risk (which ones, though?)
- Write letters to policy-makers expressing your concerns
- Be public about your concerns. Normalize caring abo... | MichaelDickens | d7QwrbZvyKTabxozw | redwood_conversation | |
Here's what I'd consider some comparatively important high-level criticisms I have of AI-2027, that I am at least able to articulate reasonably well without too much effort.
*1*
At some point, I believe Agent-4, the AI created by OpenBrain, starts to be causally connected over time. That is, unlike current AIs that a... | 1a3orn | EJZFG8m6ETWgZRFJi | redwood_conversation | |
Limiting China's computing power via export controls on hardware like GPUs might be accelerating global progress in AI capabilities.
When Chinese labs are compute-starved, their research will differentially focus on efficiency gains compared to counterfactual universes where they are less limited. So far, they've been... | Archimedes | aq7mtzkudMsKzmyrC | redwood_conversation | |
**Edit:** I've played with the numbers a bit more, and on reflection, I'm inclined to partially unroll this update. o3 doesn't break the trendline *as* much as I'd thought, and in fact, it's basically on-trend if we remove the GPT-2 and GPT-3 data-points ([which I consider particularly dubious](https://www.lesswrong.co... | Thane Ruthenis | zvk3FbeMYX6zRyrFa | redwood_conversation | |
Quick thought: If you have an aligned AI in a multipolar scenario, other AIs might threaten to cause S-risk in order to get said FAI to do stuff, or as blackmail. Therefore, we should make the FAI think of X-risk and S-risk as equally bad (even though S-risk is in reality terrifyingly worse), because that way other pow... | KvmanThinking | 9HHPYEesjwntSRLgB | redwood_conversation | |
execution is necessary for success, but direction is what sets apart merely impressive from truly great accomplishment. though being better at execution can make you better at direction, because it enables you to work on directions that others discard as impossible. | leogao | vcz3aAEpejiWS5rtt | redwood_conversation | |
Reassessing [heroic responsibility](https://www.lesswrong.com/w/heroic-responsibility), in light of subsequent events.
I think [@cousin_it](https://www.lesswrong.com/users/cousin_it?mention=user) [made a good point](https://www.lesswrong.com/posts/FTwdYvvAXkscpbDXo/r-hpmor-on-heroic-responsibility?commentId=6optXdGYPK... | Wei Dai | HoiYTfEdxahSAGYmL | redwood_conversation | |
Do we want to put out a letter for labs to consider signing, saying something like "if all other labs sign this letter then we'll stop"?
I heard lots of lab employees hope the other labs would slow down.
I'm not saying this is likely to work, but it seems easy and maybe we can try the easy thing? We might end up with... | Yonatan Cale | yGvS3T2RcboTrPF9K | redwood_conversation | |
Is it true that no one knows why Claude 3 Opus (but not other Claude models) has strong behavioral dispositions about animal welfare? | Eli Tyre | HkZ3Zb6pufYiRvvTH | redwood_conversation | |
The speed of scaling pretraining will go down ~3x in 2027-2029, reducing the probability of crossing transformative capability thresholds per unit of time after that point, if they'd not been crossed yet by then.
GPT-4 was trained in 2022 at ~2e25 FLOPs, Grok-3 and GPT-4.5 were trained in 2024 at ~3e26 FLOPs (or twice tha... | Vladimir_Nesov | jyybb8x4fEKSq9nAR | redwood_conversation | |
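Back-of-envelope on the figures quoted in that row (my arithmetic, not the author's calculation):

```python
# GPT-4 (2022) vs Grok-3 / GPT-4.5 (2024), FLOPs from the row above.
gpt4_2022 = 2e25
grok3_2024 = 3e26
per_year = (grok3_2024 / gpt4_2022) ** (1 / 2)  # the two runs are two years apart
print(f"implied growth: ~{per_year:.1f}x per year")             # ~3.9x/year
print(f"after a ~3x slowdown: ~{per_year / 3:.1f}x per year")   # ~1.3x/year
```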
Why are there so few third-party auditors of algorithms? For instance, you could have an auditing agency make specific assertions about what the Twitter algorithm is doing, or whether Community Notes is 'rigged'
* It could be that this is too large of a codebase, too many people can make changes, it's too hard to verify ... | Ben Goldhaber | 3jrKtKC4Gh2AQCcCA | redwood_conversation | |
Long reasoning training might fail to surpass pass@50-pass@400 capabilities of the base/instruct model. A [new paper](https://arxiv.org/abs/2504.13837) measured pass@k[^1] performance for models before and after RL training on verifiable tasks, and it turns out that the effect of training is to lift pass@k performance ... | Vladimir_Nesov | NxgZHyyQwTHji3Esx | redwood_conversation | |
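For reference, pass@k here is the standard metric: the probability that at least one of k samples solves the task, usually computed with the unbiased estimator from the Codex paper (Chen et al. 2021). A small sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples drawn, c correct, budget k."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 8 correct out of 400 samples, evaluated at a budget of k=50
print(pass_at_k(n=400, c=8, k=50))
```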
A tricky thing about feedback on LW (or maybe just human nature or webforum nature):
* Post: Maybe there's a target out there let's all go look (50 points)
* Comments: so inspiring! We should all go look!
* Post: What "target" really means (100 points)
* Comments: I feel much less confused, thank you
* Post: I sho... | lemonhope | kquxNLKWSL2Cj2aYi | redwood_conversation | |
**Is Superhuman Persuasion a thing?**
Sometimes I see discussions of AI superintelligence developing superhuman persuasion and extraordinary political talent.
Here are some reasons to be skeptical of the existence of 'superhuman persuasion'.
1. We don't have definite examples of extraordinary political talent.
... | Alexander Gietelink Oldenziel | Kz6mTbvh79yJbLsg2 | redwood_conversation | |
Here's what seem like priorities to me after listening to the recent Dwarkesh podcast featuring Daniel Kokotajlo:
1. Developing the safer AI tech (in contrast to modern generative AI) so that frontier labs have an alternative technology to switch to, so that it is lower cost for them to start taking warning signs... | abramdemski | jRQEarCKbxe65gLT7 | redwood_conversation | |
There was a type of guy circa 2021 that basically said that gpt-3 etc. was cool, but we should be cautious about assuming everything was going to change, because the context limitation was a key bottleneck that might never be overcome. That guy's take was briefly "discredited" in subsequent years when LLM companies inc... | lc | PX3rx5BR4mhZyS4Rs | redwood_conversation | |
Some versions of the [METR time horizon paper](https://www.lesswrong.com/posts/deesrjitvXM4xYGZd/metr-measuring-ai-ability-to-complete-long-tasks) from alternate universes:
**Measuring AI Ability to Take Over Small Countries (idea by Caleb Parikh)**
Abstract: Many are worried that AI will take over the world, but ext... | Thomas Kwa | noF8PubgWD5iFmEYo | redwood_conversation | |
Interesting anecdote on "von Neumann's onion" and his general style, from P. R. Halmos' [*The Legend of John von Neumann*](https://gwern.net/doc/math/1973-halmos.pdf):
> **Style**. As a writer of mathematics von Neumann was clear, but not clean; he was powerful but not elegant. He seemed to love fussy detail, needless... | Mo Putera | Fx89uRSAceFCP5nxH | redwood_conversation | |
I've noticed some common misconceptions that people have about [OpenAI's recent paper](https://arxiv.org/abs/2503.11926) on reward hacking and chain-of-thought (CoT) monitoring. I'll try to correct some of these below. To be clear, this isn't meant as criticism of the paper (which I quite enjoyed), just a correction of... | Sam Marks | xzLFeHNaxPKtHFKA8 | redwood_conversation | |
To what extent should we expect catastrophic failure from AI to mirror other engineering disasters / have applicable lessons from safety engineering as a field?
I would think that 1. everything is sui generis and 2. things often rhyme, but it is unclear how approaches will translate. | Mis-Understandings | noHc8hpG5sA6wRYkG | redwood_conversation | |
timelines takes
* i've become more skeptical of rsi over time. here's my current best guess at what happens as we automate ai research.
* for the next several years, ai will provide a bigger and bigger efficiency multiplier to the workflow of a human ai researcher.
* ai assistants will probably not u... | leogao | Rw3TGzgvSp7bFPy7Q | redwood_conversation | |
Pseudo-flat tax formula:
Assume utility is logarithmic in income, and the goal is to set the experienced tax burden to be constant.
Then the average tax rate, where $a$ is a parameter controlling the experienced tax burden and $z$ is the break-even point, is as follows:
$$
f(x)=1-\left(\fra... | Jerdle | CuG4dDZaopp5BYXbF | redwood_conversation | |
[@Ryan Greenblatt](https://www.lesswrong.com/users/ryan-greenblatt?mention=user) I hereby request you articulate the thing you said to me earlier about the octopi breeding program! | Daniel Kokotajlo | MuqXzdiJK5guwPiCk | redwood_conversation | |
The *waiting room strategy* for people in undergrad/grad school who have <6 year median AGI timelines: treat school as "a place to be until you get into an actually impactful position". Try as hard as possible to get into an impactful position as soon as possible. As soon as you get in, you leave school.
Upsides compa... | Nikola Jurkovic | pjETRenKwdb4pBsbz | redwood_conversation | |
Ok, so it seems clear that we *are*, for better or worse, likely going to try to get AGI to do our alignment homework.
Who has thought through all the other homework we might give AGI that is as good of an idea, assuming a model that isn't an instant-game-over for us? E.G., I remember @Buck rattling off a list o... | davekasten | 32jReMrHDd5vkDBwt | redwood_conversation | |
[Sam Altman said in an interview](https://www.reddit.com/r/singularity/comments/1io1fj1/sam_explains_why_openai_is_integrating_the/):
> We want to bring GPT and o together, so we have one integrated model, the AGI. It does everything all together.
This statement, combined with [today's announcement that GPT-5 will in... | Nikola Jurkovic | 8RrTEZznnBgp8ATay | redwood_conversation | |
It's the first official day of the AI ~~Safety~~ Action Summit, and thus it's also the day that [the Seoul Commitments](https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024) (made by sixteen companies last year to adopt an RSP... | tylerjohnston | zhQLgeAB5xJMrBP8p | redwood_conversation | |
At [a talk at UTokyo](https://youtu.be/8LmfkUb2uIY?si=-DQMMeswI2u49ku9&t=1067), Sam Altman said (clipped [here](https://x.com/tsarnick/status/1888111042301211084) and [here](https://x.com/tsarnick/status/1888114693472194573)):
* “We’re doing this new project called Stargate which has about 100 times the computing po... | Nikola Jurkovic | Yy8nHayF8vrWgjXpA | redwood_conversation | |
"Self-recognition" as a backdoor.
Assume that models can recognise data they generate vs data they do not generate, with high fidelity. This could probably be used as a contextual trigger for backdoor behaviour, e.g. writing insecure code.
I think a model organism along these lines might be interesting to develop, ... | Daniel Tan | o5etvk94iKwgFpym7 | redwood_conversation | |
Someone should see how good Deep Research is at generating reports on AI Safety content. | Chris_Leong | 7safRgmGoq7Q5BM7D | redwood_conversation | |
A MoE transformer can reach the same loss as a compute-optimal dense model using 3x-6x less compute, but will need **the same amount of data** to do it. So compute-optimal MoEs don't improve data efficiency and don't contribute to mitigating data scarcity.
A new [Jan 2025 paper](https://arxiv.org/abs/2501.12370) offers s... | Vladimir_Nesov | MHJRw7hwg2hLx8hyK | redwood_conversation | |
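To see why "less compute, same data" follows, plug the claim into the standard C ≈ 6·N·D FLOPs approximation: at fixed data D, a 3x-6x compute reduction just means fewer active parameters, not fewer tokens. A sketch with illustrative numbers (mine, not the paper's):

```python
# Standard FLOPs approximation: C ≈ 6 * N_active * D.
N_dense, D = 70e9, 1.4e12            # hypothetical compute-optimal dense run
C_dense = 6 * N_dense * D
for k in (3, 6):                     # the 3x-6x advantage quoted above
    print(f"{k}x cheaper MoE: {C_dense / k:.2e} FLOPs, "
          f"~{N_dense / k / 1e9:.0f}B active params, same {D:.1e} tokens")
```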
[Edit 2: faaaaaaast. https://x.com/jrysana/status/1902194419190706667 ]
[Edit: Please also see Nick's reply below for ways in which this framing lacks nuance and may be misleading if taken at face value.]
https://blogs.nvidia.com/blog/deepseek-r1-nim-microservice/
> The DeepSeek-R1 NIM microservice can deliver up to ... | Nathan Helm-Burger | 5j5WEvo45KheK2zr6 | redwood_conversation | |
I think a very common problem in alignment research today is that people focus almost exclusively on a specific story about strategic deception/scheming, and that story is a very narrow slice of the AI extinction probability mass. At some point I should probably write a proper post on this, but for now here are a few off... | johnswentworth | 56JZRd8r6fRwFtHf5 | redwood_conversation | |
I recently gave a workshop in AI control, for which I created an exercise set.
The exercise set can be found here: [https://drive.google.com/file/d/1hmwnQ4qQiC5j19yYJ2wbeEjcHO2g4z-G/view?usp=sharing](https://drive.google.com/file/d/1hmwnQ4qQiC5j19yYJ2wbeEjcHO2g4z-G/view?usp=sharing)
The PDF is self-contained, but thr... | Olli Järviniemi | xkNq65SsyHzpCQJ4r | redwood_conversation | |
I'm very confused about current AI capabilities and I'm also very confused why other people aren't as confused as I am. I'd be grateful if anyone could clear up either of these confusions for me.
How is it that AI is seemingly superhuman on benchmarks, but also pretty useless?
For example:
- o3 scores higher on Front... | Cleo Nardo | dugenmwgnSsytkHjG | redwood_conversation | |
While reading [OpenAI Operator System Card](https://cdn.openai.com/operator_system_card.pdf), the following paragraph on page 5 seemed a bit weird:
> We found it fruitful to think in terms of misaligned actors, where:
> * the user might be misaligned (the user asks for a harmful task),
> * the model might be misaligne... | Dentosal | YzJvxrdtu3znuiJYj | redwood_conversation | |
How fast should the field of AI safety grow? An attempt at grounding this question in some predictions.
* [Ryan Greenblatt seems to think](https://www.alignmentforum.org/posts/tmWMuY5HCSNXXZ9oq/buck-s-shortform?commentId=prcvXQWfPyQTBwpdE) we can get a 30x speed-up in AI R&D using near-term, plausibly safe AI system... | Ryan Kidd | TPWqnwFBxJmbyaYjj | redwood_conversation | |
I do a lot of "mundane utility" work with chat LLMs in my job[^xo6ax5w8pw], and I find there is a disconnect between the pain points that are most obstructive to me and the kinds of problems that frontier LLM providers are solving with new releases.
I would pay a premium for an LLM API that was better tailored to my n... | nostalgebraist | DZ8vdktiZ8aeHmPcm | redwood_conversation | |
**"Protecting model weights" is aiming too low, I'd like labs to protect their intellectual property too.** Against state actors. This probably means doing engineering work inside an air gapped network, yes.
I feel it's outside the Overton Window to even suggest this and I'm not sure what to do about that except write... | Yonatan Cale | inA6PuFoHm5rizjBB | redwood_conversation | |
Collection of how-to guides
* Research soft skills
* [How to make research slides](https://www.lesswrong.com/posts/i3b9uQfjJjJkwZF4f/tips-on-empirical-research-slides) by James Chua and John Hughes
* [How to manage up](https://app.read.ai/analytics/meetings/01J41JWJHP2EYMYGQPQH0BN0P8?utm_source=Share_Nav... | Daniel Tan | 2veMNNw2x9Evy6ict | redwood_conversation | |
People are not thinking clearly about AI-accelerated AI research. [This comment by Thane Ruthenis](https://www.lesswrong.com/posts/auGYErf5QqiTihTsJ/?commentId=kNHivxhgGidnPXCop) is worth amplifying.
> I'm very skeptical of AI being on the brink of dramatically accelerating AI R&D.
>
> My current model is that ML ex... | Alexander Gietelink Oldenziel | nobEomqHnMnvbDFnW | redwood_conversation | |
**Research mistakes I made over the last 2 years.**
Listing these in part so that I hopefully learn from them, but also because I think some of these are common among junior researchers, so maybe it's helpful for someone else.
* I had an idea I liked and stayed attached to it too heavily.
* (The idea is using... | Erik Jenner | tgBZGssg5YBX8D5CT | redwood_conversation | |
Prover-verifier games as an alternative to AI control.
[AI control](https://arxiv.org/abs/2312.06942) has been suggested as a way of safely deploying highly capable models without the need for rigorous proof of alignment. This line of work is likely quite important in worlds where we do not expect to be able to fully... | Daniel Tan | 5Hdqgiv3YbNTqGGCP | redwood_conversation | |
I'd like to internally allocate social credit to people who publicly updated after the recent Redwood/Anthropic result, after previously believing that scheming behavior was very unlikely in the default course of events (or a similar belief that was decisively refuted by those empirical results).
Does anyone have link... | RobertM | s6wWvrAe7xh4Kufy9 | redwood_conversation | |
Working on anti spam/scam features at Google or banks could be a leveraged intervention on some worldviews. As AI advances it will be more difficult for most people to avoid getting scammed, and including really great protections into popular messaging platforms and banks could redistribute a lot of money from AIs to h... | Tao Lin | SsYcHvtEbde9FhhdQ | redwood_conversation | |
On o3: for what feels like the twentieth time this year, I see people freaking out, saying AGI is upon us, it's the end of knowledge work, timelines now clearly in single-digit years, etc, etc. I basically don't buy it, my low-confidence median guess is that o3 is massively overhyped. Major reasons:
* I've personall... | johnswentworth | szp5fNZJqpfrsyQP9 | redwood_conversation | |
Is the move of a lot of alignment discourse to Twitter/X a coordination failure or a positive development?
I'm kinda sad that LW seems less "alive" than it did a few years ago, but also seems healthy to be engaging in a more neutral space with a wider audience | Oliver Daniels | yg9GpjLw2jjCLYzhH | redwood_conversation | |
With o1, and now o3, it seems fairly plausible that there will be a split between "verifiable" capabilities and general capabilities. Sure, there will be some cross-pollination (transfer), but this might have some limits.
What then? Can a superhuman mathematical + coding AI also just reason through political stra... | Ariel_ | tKDaKN7thPtzcmMou | redwood_conversation | |
Many expert-level benchmarks totally overestimate the range and diversity of their experts' knowledge. A person with a PhD in physics is probably undergraduate level in many parts of physics that are not related to his/her research area, and sometimes we even see that within an expert's domain (neurologists usually forget... | Hopenope | 2vbFARqWHMp5wCNmu | redwood_conversation | |
**I wish someone ran a study finding what human performance on** [**SWE-bench**](https://www.swebench.com/#) **is.** There are ways to do this for around $20k: If you try to evaluate on 10% of SWE-bench (so around 200 problems), with around 1 hour spent per problem, that's around 200 hours of software engineer time. So... | Nikola Jurkovic | cnaLRYwFqYwKyiunG | redwood_conversation | |
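The $20k figure, spelled out (the hourly rate is my assumption, not stated in the row):

```python
problems = 200           # ~10% of SWE-bench
hours_per_problem = 1
usd_per_hour = 100       # assumed software engineer rate
print(f"${problems * hours_per_problem * usd_per_hour:,}")  # $20,000
```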
x-posting a kinda rambling [thread](https://x.com/saprmarks/status/1868403534070518268) I wrote about [this blog post](https://www.tilderesearch.com/blog/sieve) from Tilde research.
---
If true, this is the first known application of SAEs to a found-in-the-wild problem: using LLMs to generate fuzz tests that don't... | Sam Marks | 8NxJy6XczsLDDnGyi | redwood_conversation | |
**Reputation is lazily evaluated**
When evaluating the reputation of your organization, community, or project, many people flock to surveys in which you ask randomly selected people what they think of your thing, or what their attitudes towards your organization, community or project are.
If you do this, you will ve... | habryka | rzB5wieXwqqtZ4piw | redwood_conversation | |
I asked Rohin Shah what the debate agenda is about and he said (off the cuff, not necessarily worded well) (context omitted) (emphasis mine):
> Suppose you have any task where given an input x the AI has to produce some output y. (Can be question answering, arbitrary chatbot stuff, being an agent, whatever.)
>
> Deba... | Zach Stein-Perlman | DLYDeiumQPWv4pdZ4 | redwood_conversation |