WARNING: Blockbook is currently in a state of heavy development. We may at any time implement backwards-incompatible changes that require a full reindexation of the database. Also, do not expect this documentation to always be up to date.
Blockbook is a back-end service for the Trezor wallet. The main features of Blockbook are:
- index of addresses and address balances of the connected block chain
- fast searches in the indexes
- simple blockchain explorer
- websocket, API and legacy Bitcore Insight compatible socket.io interfaces
- support of multiple coins (Bitcoin and Ethereum type), with easy extensibility for other coins
- scripts for easy creation of Debian packages for the backend and Blockbook
The officially supported platform is Debian Linux on the AMD64 architecture.
Memory and disk requirements for the initial synchronization of Bitcoin mainnet are around 32 GB RAM and over 160 GB of disk space. After the initial synchronization, a fully synchronized instance uses about 10 GB RAM. Other coins should have lower requirements, depending on the size of their blockchain. Note that fast SSD disks are highly recommended.
User installation guide is here.
Developer build guide is here.
Contribution guide is here.
Blockbook currently supports over 20 coins, among them:
- Bitcoin, Litecoin, Bitcoin Cash, Bgold, ZCash, Dash, Ethereum, Ethereum Classic
Testnets for some coins are also supported, for example:
- Bitcoin Testnet, Bitcoin Cash Testnet, ZCash Testnet, Ethereum Testnet Ropsten
List of all implemented coins is in the registry of ports.
How to reduce the memory footprint of the initial sync:
- disable the RocksDB cache with the parameter `-dbcache=0`; the default cache size is 500 MB
- run Blockbook with the parameter `-workers=1`. This disables bulk import mode, which caches a lot of data in memory (outside of the RocksDB cache). The import will run about twice as slowly, but especially for smaller blockchains this is not a problem at all. A combined example is shown below.
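Both options can be combined in a single invocation. This is an illustrative sketch only; the `-blockchaincfg` and `-datadir` paths are placeholders to be replaced with your own configuration, not values taken from this documentation:

```sh
# Reduced-memory initial sync (placeholder paths, adjust to your installation)
./blockbook -sync \
  -blockchaincfg=/path/to/blockchaincfg.json \
  -datadir=/path/to/blockbook-db \
  -dbcache=0 \
  -workers=1
```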
Please add your experience to this issue.
Blockbook was killed during the initial import, most commonly by the OOM killer. By default, Blockbook performs the initial import in bulk import mode, which for performance reasons does not immediately store all data to the database. If Blockbook is killed during this phase, the database is left in an inconsistent state.
See above for how to reduce the memory footprint, then delete the database files and run the import again (see the example below).
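A minimal sketch of the recovery steps, assuming the database directory was passed to Blockbook via `-datadir` (the paths are placeholders):

```sh
# Remove the inconsistent RocksDB files, then restart the import
# with the reduced-memory options described above.
rm -rf /path/to/blockbook-db/*
./blockbook -sync \
  -blockchaincfg=/path/to/blockchaincfg.json \
  -datadir=/path/to/blockbook-db \
  -dbcache=0 \
  -workers=1
```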
Check this or this issue for more info.
This issue discusses how to run Blockbook on Ubuntu. If you have some additional experience with Blockbook on Ubuntu, please add it to this issue.
Your coin's block/transaction data may not be compatible with the default `BitcoinParser` `ParseBlock`/`ParseTx` methods. In that case, implement your coin in a similar way to zcash and some other coins: the principle is not to parse the block/transaction data in Blockbook but instead to get the parsed transactions as JSON from the backend, as sketched below.
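A minimal, self-contained Go sketch of that principle, assuming a Bitcoin-like backend whose `getblock` RPC supports verbosity 2 (fully decoded transactions). The types and function names below are hypothetical illustrations, not Blockbook's actual parser interfaces:

```go
// Hypothetical illustration of the "let the backend parse" approach: instead of
// decoding raw block bytes in Blockbook, ask the backend daemon for an already
// decoded block over JSON-RPC (getblock with verbosity 2) and unmarshal the JSON.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Minimal JSON-RPC request/response envelopes.
type rpcRequest struct {
	Method string        `json:"method"`
	Params []interface{} `json:"params"`
	ID     int           `json:"id"`
}

type rpcResponse struct {
	Result json.RawMessage `json:"result"`
	Error  *struct {
		Code    int    `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
}

// Only the fields this sketch needs; a real coin exposes many more.
type verboseTx struct {
	Txid string `json:"txid"`
	Vout []struct {
		Value        float64 `json:"value"`
		ScriptPubKey struct {
			Addresses []string `json:"addresses"`
		} `json:"scriptPubKey"`
	} `json:"vout"`
}

type verboseBlock struct {
	Hash   string      `json:"hash"`
	Height int         `json:"height"`
	Tx     []verboseTx `json:"tx"`
}

// getBlockVerbose lets the backend do the parsing: getblock with verbosity 2
// returns the block with all transactions already decoded to JSON.
func getBlockVerbose(rpcURL, user, pass, blockHash string) (*verboseBlock, error) {
	body, err := json.Marshal(rpcRequest{Method: "getblock", Params: []interface{}{blockHash, 2}, ID: 1})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", rpcURL, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth(user, pass)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var r rpcResponse
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		return nil, err
	}
	if r.Error != nil {
		return nil, fmt.Errorf("backend error %d: %s", r.Error.Code, r.Error.Message)
	}
	var b verboseBlock
	if err := json.Unmarshal(r.Result, &b); err != nil {
		return nil, err
	}
	return &b, nil
}

func main() {
	// Placeholder URL, credentials and block hash; replace with your backend's values.
	block, err := getBlockVerbose("http://127.0.0.1:8332", "rpcuser", "rpcpass", "<block hash>")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("block %s at height %d contains %d parsed transactions\n",
		block.Hash, block.Height, len(block.Tx))
}
```

The advantage of this design is that the backend, which already knows the coin's serialization format, does all the decoding, so Blockbook does not need a coin-specific binary parser.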
Blockbook stores data in the key-value store RocksDB. The database format is described here.
Blockbook API is described here.