Project site: https://dunchead.github.io/ai-safety/
The AI Safety Project aims to map the landscape of AI risk and understand the challenges we must overcome as a global community.
The field of AI development is moving at such a pace that there's a real chance we may fall victim to risks before we fully understand them, or are even aware of them. This is probably already happening.
Community-driven initiatives are essential to keep up with this rapidly evolving landscape. The goals of this project are:
- Create a simple framework for understanding the landscape of AI risk and the main challenges we need to overcome
- Share key resources and potential solutions
- Provide a space for community collaboration that is not driven by economic incentives
This is a collaborative project and a work in progress. We encourage contributions for any of the following:
- Additional risks/challenges
- Suggested solutions
- Key links/resources
Feel free to submit your own ideas.
Bugfixes/improvements for the website itself are also welcome.
Contributions can be made in two ways:
- As posts in the discussion forum under 'AI risks, challenges and solutions'
- As pull requests (most content is in `script.js`)
This is a high-level map, not an encyclopedia.
The goal is to create a clear outline of the major issues with links to only the most useful resources for further reading; we're not trying to compile a detailed or comprehensive list. Too much information is part of the problem we're trying to solve.
Yes please:
✅ Key points not already covered, concisely written
✅ Links/resources that are especially clear, original or significant
✅ Restructuring that improves the content while maintaining the clarity of the design
No thanks:
❌ Verbose or repetitive text
❌ Links/resources that do not offer much additional value
❌ Non-essential restructuring of the design/content