Some people say that the overhead for having a nullable column is 1 bit per row, others that it is 1 byte per row. Which one is true? I assume it is 1 bit per row (so 8 rows make 1 byte), am I right?
The NULL bitmap is stored at the row level, so one byte is needed for every 8 columns of each row. The overhead is incurred for each row regardless of whether the columns are defined as nullable or not (since SQL Server 2000).
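Here is a minimal sketch of how you could observe this yourself; the table and column names are made up, and it assumes a scratch database you can create objects in. Because the row carries one NULL-bitmap byte per 8 columns, going from 8 to 9 fixed-length int columns should grow the record by 5 bytes (4 for the new int plus 1 extra bitmap byte), not just 4.

```sql
-- Hypothetical demo tables: 8 vs. 9 fixed-length columns.
CREATE TABLE dbo.NullBitmapDemo8 (c1 int, c2 int, c3 int, c4 int,
                                  c5 int, c6 int, c7 int, c8 int);
CREATE TABLE dbo.NullBitmapDemo9 (c1 int, c2 int, c3 int, c4 int,
                                  c5 int, c6 int, c7 int, c8 int, c9 int);

-- One all-NULL row in each table.
INSERT dbo.NullBitmapDemo8 DEFAULT VALUES;
INSERT dbo.NullBitmapDemo9 DEFAULT VALUES;

-- DETAILED mode reports physical record sizes, which include the row header,
-- fixed-length data, the column count and the NULL bitmap.
SELECT OBJECT_NAME(object_id) AS table_name, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.NullBitmapDemo8'), NULL, NULL, 'DETAILED')
UNION ALL
SELECT OBJECT_NAME(object_id), max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.NullBitmapDemo9'), NULL, NULL, 'DETAILED');
```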
For NULL values in varchar columns there is still a 2-byte overhead per row. Is that actually true?
A column offset is stored for NULL variable-length columns except for the trailing one(s), so a NULL value in a variable-length column may incur a 2-byte overhead depending on its position in the row.
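A small sketch of the "depends on position" point, again with hypothetical names: the variable-length offset array only has entries up to the last non-NULL variable-length column, so a trailing NULL varchar costs nothing, while a NULL varchar that is followed by a populated one still needs its 2-byte offset entry.

```sql
CREATE TABLE dbo.VarNullDemo (id int NOT NULL, v1 varchar(100) NULL, v2 varchar(100) NULL);

INSERT dbo.VarNullDemo (id, v1, v2) VALUES (1, 'abc', NULL);  -- trailing NULL: no offset entry for v2
INSERT dbo.VarNullDemo (id, v1, v2) VALUES (2, NULL, 'abc');  -- leading NULL: v1 still gets a 2-byte offset entry

-- Both rows store the same amount of data, but the second record should be
-- 2 bytes larger because of the extra offset entry.
SELECT min_record_size_in_bytes, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.VarNullDemo'), NULL, NULL, 'DETAILED');
```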
Most important question: I have a table with ~250 million rows of data and a total size of ~115 GB. I added 7 columns using the code below, checking the size of the table with sp_spaceused after each command.
This is where it gets interesting. SQL Server Enterprise Edition (and Developer Edition) includes optimizations such that some schema modifications are metadata-only operations. If you run the same test on Standard or Express Edition, the space used will double.
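As a sketch of how to verify which behavior you get (the original ALTER statements aren't reproduced here, so dbo.BigTable and the column definition are placeholders): check the edition, then compare the sp_spaceused output around each ALTER to see whether the change was metadata-only (no growth in the data size) or caused rows to be rewritten.

```sql
SELECT SERVERPROPERTY('Edition') AS edition;

EXEC sp_spaceused 'dbo.BigTable';

ALTER TABLE dbo.BigTable ADD new_col_1 int NULL;  -- placeholder column definition

EXEC sp_spaceused 'dbo.BigTable';  -- compare reserved/data sizes with the previous output
```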
See https://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-record/. Although old, most of it still applies to current SQL Server versions, including Azure SQL Database.