Commit 138ad93: add link to shortform

1 parent: 6d43634

1 file changed, 1 addition & 1 deletion

_posts/2024-11-13-a-theory-of-how-alignment-research-should-work.markdown

```diff
@@ -7,7 +7,7 @@ date: 2024-11-13
 Epistemic status:
 - I listened to [the Dwarkesh episode with Gwern](https://www.dwarkeshpatel.com/p/gwern-branwen) and started attempting to think about life, the universe, and everything
 - less than an hour of thought has gone into this post
-- that said, it comes from a background of me thinking for a while about how the field of AI alignment should relate to agent foundations research
+- that said, it comes from a background of me [thinking](https://www.lesswrong.com/posts/WgMhovN7Gs6Jpn3PH/danielfilan-s-shortform-feed?commentId=RzdD4JiewyyHeuYBb) for a while about how the field of AI alignment should relate to agent foundations research

 Maybe obvious to everyone but me, or totally wrong (this doesn't really grapple with the challenges of working in a domain where an intelligent being might be working against you), but:
 - we currently don't know how to make super-smart computers that do our will
```

0 commit comments