Abundance Roadmap

Research and development of the protocol. Developing a comprehensive incentive structure that aligns user interests. Identifying and red-teaming potential attack vectors. Developing mechanisms to address sybil attacks, collusion, bad actors gaming the protocol, and so on.

See: Abundance Protocol Whitepaper

Development - Smart contract logic and data flow development; user interface buildout

See: GitHub repo

Live Testnet - Smart contract performance testing and auditing

Mainnet Alpha - live testing of the protocol and economic modeling

Mainnet Beta - battle-testing of the protocol

Mainnet Launch - completion of the first development phase; continual improvement of the protocol

Empowering Communities - supporting projects, DAOs, and local communities in launching their own Abundance Ecosystems for funding common goods

Multi-chain + L1 Launch - One of Abundance's superpowers is its ability to create alignment between competing communities (projects, blockchains, etc.). In a Scarcity Paradigm, groups compete over resources and speculate on coin prices (regardless of the underlying value of the project).

Abundance creates a new dynamic: it rewards everyone based on their contribution to a project. This makes people focus on creating the most impact in the broader ecosystem – which means creating real value, distributing resources efficiently, and building cross-project collaborations.
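As a rough illustration of contribution-based rewards, consider a pool split pro rata by assessed impact. This is a hypothetical sketch, not the protocol's actual reward mechanism; the function name, impact scores, and token amounts are all invented for the example.

```python
def split_rewards(pool, contributions):
    """Split a reward pool pro rata by each contributor's impact score.

    `contributions` maps contributor -> assessed impact (hypothetical units).
    """
    total = sum(contributions.values())
    if total == 0:
        return {who: 0.0 for who in contributions}
    return {who: pool * score / total for who, score in contributions.items()}

# Hypothetical example: a 1,000-token pool split among three contributors
# whose assessed impact scores are 50, 30, and 20.
rewards = split_rewards(1000, {"alice": 50, "bob": 30, "carol": 20})
# alice: 500.0, bob: 300.0, carol: 200.0
```

Because payouts track assessed impact rather than token speculation, contributors are pushed toward whatever genuinely increases the ecosystem's value.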

This is achieved by running the protocol on every blockchain and on its own dedicated chain, so all the different projects and ecosystems can work together smoothly.

Read more: Abundance Roadmap: Everywhere All at Once

AI-based Consensus Mechanism - over time, more and more of the validation process can be automated or powered by AI. Initially these tools will only assist validators, but at some stage AI can be integrated directly into the protocol, until it replaces manual reviewers entirely (the AI will be trained on validators' inputs until it is sufficiently effective at replacing them).

No user will be expected to validate the entire AI process - that is, of course, impossible. Instead, the process will work through Validation by Sampling: the entire AI response is chunked into verifiable subsets, and each subset can be verified by users.
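A minimal sketch of Validation by Sampling under assumed parameters (the fixed-size chunking, the sample size per validator, and the random assignment are illustrative choices, not the protocol's actual design):

```python
import random

def chunk(text, size):
    """Split an AI response into fixed-size verifiable subsets."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def assign_samples(chunks, validators, per_validator=2, seed=None):
    """Randomly assign each validator a small sample of chunk indices.

    No single validator checks the whole response; collectively the
    validators cover it through overlapping random samples.
    """
    rng = random.Random(seed)
    k = min(per_validator, len(chunks))
    return {v: rng.sample(range(len(chunks)), k) for v in validators}

# Hypothetical example: chunk a response and spread it across 3 validators.
chunks = chunk("some long AI-generated impact report ...", 10)
assignments = assign_samples(chunks, ["v1", "v2", "v3"], seed=42)
```

Each validator then verifies only the chunks assigned to them, keeping the per-user workload small regardless of how large the AI's output is.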

A verified process ensures only that the system correctly produced a result from the input and training it received, not that the AI's output exactly matches the project's economic impact. For that reason, the user-based challenge period will remain.