1. 07 Jan, 2022 2 commits
  2. 06 Jan, 2022 4 commits
  3. 05 Jan, 2022 7 commits
  4. 04 Jan, 2022 3 commits
  5. 26 Dec, 2021 1 commit
  6. 23 Dec, 2021 2 commits
  7. 22 Dec, 2021 3 commits
  8. 20 Dec, 2021 1 commit
  9. 17 Dec, 2021 6 commits
  10. 16 Dec, 2021 1 commit
  11. 15 Dec, 2021 2 commits
  12. 14 Dec, 2021 4 commits
  13. 11 Dec, 2021 1 commit
  14. 09 Dec, 2021 1 commit
  15. 07 Dec, 2021 2 commits
    • core, eth: improve delivery speed on header requests (#23105) · db03faa1
      Martin Holst Swende authored
      This PR reduces the amount of work we do when answering header queries, e.g. when a peer
      is syncing from us.
      
      For some items, e.g. block bodies, we take the RLP data read from the database and
      plug it directly into the response package. We didn't do that for headers: instead we
      read the header RLP, decoded it into a types.Header, and re-encoded it back to RLP.
      This PR changes that to keep the data in RLP form as much as possible. When a node is
      syncing from us, it typically requests 192 contiguous headers. On master this has the
      following effect:
      
      - For headers not in ancient: 2 db lookups. One to translate number->hash (the
        canonical-hash lookup, required even though the request already specifies the
        number), and another to read the header by hash (this latter one is sometimes
        cached).
        
      - For headers in ancient: 1 file lookup/syscall to translate number->hash (again,
        even though the request is by number), and another to read the header itself. After
        that, it also performs a hashing of the header, to ensure that the hash matches the
        expected one.

      In this PR, I instead move the logic for "give me a sequence of blocks" into the
      lower layers, where the database can determine how and what to read from leveldb
      and/or ancients.
      
      There are basically four types of requests; three of them are improved this way. The
      fourth, by hash going backwards, is trickier to optimize. However, since we know that
      the gap is 0, we can look up by the parentHash and still shave off all the
      number->hash lookups.
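
      A sketch of that parent-hash walk (again with hypothetical accessor names in the same
      sketch package as above, not geth's real API): only the first header has to be
      resolved by hash, after which each step reads directly by (number-1, parentHash):

          // backwardHeaderStore is a stand-in for the database layer.
          type backwardHeaderStore interface {
              // HeaderByHash resolves a header from its hash, returning its number,
              // parent hash and raw RLP encoding.
              HeaderByHash(hash [32]byte) (number uint64, parent [32]byte, data []byte, ok bool)
              // HeaderAt reads a header whose number and hash are both already known.
              HeaderAt(number uint64, hash [32]byte) (parent [32]byte, data []byte, ok bool)
          }

          // collectBackwards serves "count headers from start, walking towards genesis,
          // gap 0": after the first by-hash lookup, each header's parent hash identifies
          // the next header exactly, so no number->hash lookups are needed.
          func collectBackwards(db backwardHeaderStore, start [32]byte, count uint64) [][]byte {
              number, parent, first, ok := db.HeaderByHash(start)
              if !ok {
                  return nil
              }
              headers := [][]byte{first}
              for uint64(len(headers)) < count && number > 0 {
                  number--
                  nextParent, data, found := db.HeaderAt(number, parent)
                  if !found {
                      break
                  }
                  headers = append(headers, data)
                  parent = nextParent
              }
              return headers
          }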
      
      The gapped collection can be optimized similarly, as a follow-up, at least in three out of
      four cases.
      Co-authored-by: Felix Lange <fjl@twurst.com>