Postgres - how to manage a rapidly growing DB



  • I have a rapidly growing database, which generates noticeable storage costs (buying new SSDs, a bigger backup license, etc.). On top of that, an ever larger maintenance window is required to finish the backup jobs.

    I wonder how you guys are addressing this in your production systems?

    My first idea was to split the biggest tables (and their indexes) into a 'historical' part and a 'most current' part: keep the 'historical' piece on slower but less expensive HDDs and the 'most current' piece on SSDs, and stop backing up the 'historical' part until it changes.

    That way I could generate some savings (no need to invest in expanding the SSD arrays) and reduce the cost of backups (no backing up the 'historical' piece over and over), while still being able to access the 'historical' data, just a bit more slowly, I guess.
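
    For example, once the cold rows live in their own partitions, the logical dump job could skip them with pg_dump's --exclude-table option. A sketch, assuming purely hypothetical yearly partitions named like measurements_2019, measurements_2020:

      pg_dump --exclude-table='measurements_2019*' \
              --exclude-table='measurements_2020*' \
              --format=custom --file=current.dump mydb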

    However, I have doubts about the performance impact and about potential consistency breaks (performance is the key thing in my business).

    Do you have any experience with this? Any thoughts or ideas?

    PS: I have PG12, but am planning a migration to PG14 soon.



  • Partition your tables by range. Create an additional tablespace on the slow, cheap storage. Move old partitions to the slow tablespace (and drop partitions with very old data).
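
    A minimal sketch of that layout, with purely illustrative names (a measurements table keyed on created_at, an archive_slow tablespace on the HDD mount):

      -- Tablespace on the cheap HDD storage (path is an assumption).
      CREATE TABLESPACE archive_slow LOCATION '/mnt/hdd/pg_archive';

      -- Range-partitioned table keyed on the timestamp column.
      CREATE TABLE measurements (
          created_at timestamptz NOT NULL,
          device_id  bigint      NOT NULL,
          payload    jsonb
      ) PARTITION BY RANGE (created_at);

      CREATE TABLE measurements_2023 PARTITION OF measurements
          FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
      CREATE INDEX measurements_2023_created_at_idx
          ON measurements_2023 (created_at);

      -- Cold partition: move the table and its index to the slow tablespace.
      -- Note: SET TABLESPACE rewrites the files and holds an
      -- ACCESS EXCLUSIVE lock on the partition while it runs.
      ALTER TABLE measurements_2023 SET TABLESPACE archive_slow;
      ALTER INDEX measurements_2023_created_at_idx SET TABLESPACE archive_slow;

      -- Data you no longer need at all can simply be dropped.
      DROP TABLE measurements_2019;

    Queries that filter on created_at only touch the relevant partitions (partition pruning), so the hot SSD partitions keep serving the performance-critical traffic.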

    You should have partitioned tables from the beginning. To transition, you could create a new partitioned table and define your current table as a partition of it.
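
    Roughly like this, again with hypothetical names and range bounds; the existing table becomes the single partition holding all current data:

      BEGIN;
      ALTER TABLE measurements RENAME TO measurements_legacy;

      CREATE TABLE measurements (
          created_at timestamptz NOT NULL,
          device_id  bigint      NOT NULL,
          payload    jsonb
      ) PARTITION BY RANGE (created_at);

      -- A CHECK constraint matching the partition bounds lets
      -- ATTACH PARTITION skip its full-table validation scan.
      ALTER TABLE measurements_legacy
          ADD CONSTRAINT measurements_legacy_range
          CHECK (created_at >= '2000-01-01' AND created_at < '2024-01-01');

      ALTER TABLE measurements ATTACH PARTITION measurements_legacy
          FOR VALUES FROM ('2000-01-01') TO ('2024-01-01');

      -- New rows land in fresh partitions from here on.
      CREATE TABLE measurements_2024 PARTITION OF measurements
          FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
      COMMIT;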

    Another way to partition an existing table would be to use logical replication for the upgrade and replicate into a partitioned table.
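
    A sketch of that route, with placeholder connection details. Note that routing incoming rows into the partitions of a partitioned target table requires PG13 or later on the subscriber, so it fits the planned PG14 migration:

      -- On the existing PG12 server (requires wal_level = logical):
      CREATE PUBLICATION big_tables_pub FOR TABLE measurements;

      -- On the new server, after creating the partitioned "measurements"
      -- with the same columns plus its partitions:
      CREATE SUBSCRIPTION big_tables_sub
          CONNECTION 'host=old-primary dbname=mydb user=replicator'
          PUBLICATION big_tables_pub;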



