As I understand it, when you define a datetime2 column on a table you also define its precision. That precision takes 1 byte and is stored at the column level, not per row. With a precision of 5 or more, a datetime2 column takes 8 bytes per row.
But when you convert that same datetime2 value to varbinary, it takes 9 bytes. That is because the precision byte that normally lives at the column level must now be stored with the value itself.
I am curious how this relates to when a DateTime2 is stored in memory. Say I have 1,000,000 DateTime2s in memory (each with a precision of 5 or more). Will that take up 8,000,000 bytes of memory, or 9,000,000 bytes of memory?
Basically, I would like to know whether a default-precision datetime2 will put more pressure on Page Life Expectancy than a plain datetime.
I am curious how this relates to when a DateTime2 is stored in memory.
In SQL Server, data on disk is identical to data in memory.* Data pages are copied from disk to memory and back unchanged, which would be expensive if the data were transformed on every read or flush.
Say I have 1,000,000 DateTime2s in memory (each with a precision of 5 or more). Will that take up 8,000,000 bytes of memory, or 9,000,000 bytes of memory?
8,000,000 bytes, since storing a page in memory is completely unrelated to converting a datetime2 to varbinary.
* Exceptions: In-Memory OLTP tables are an exception, and are substantially different in memory and on disk. And Transparent Database Encryption tables are decrypted as they are read into memory and encrypted as they are flushed to disk.
And as documented for datetime2:
Storage size 1:
6 bytes for precision less than 3.
7 bytes for precision 3 or 4.
8 bytes for all other precisions. 2
1 Provided values are for uncompressed rowstore. Use of data compression or columnstore may alter storage size for each precision. Additionally, storage size on disk and in memory may differ. For example, datetime2 values always require 8 bytes in memory when batch mode is used.
2 When a datetime2 value is cast to a varbinary value, an additional byte is added to the varbinary value to store precision.
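The documented size rules above can be expressed as a small sketch. This is just an illustration of the lookup (the function names are my own, not a SQL Server API); the +1 byte for varbinary is the precision byte that normally lives in the column metadata.

```python
def datetime2_storage_bytes(precision: int) -> int:
    """Row-level storage size of a datetime2(p) value, per the documented rules
    for uncompressed rowstore."""
    if not 0 <= precision <= 7:
        raise ValueError("datetime2 precision must be between 0 and 7")
    if precision < 3:
        return 6
    if precision <= 4:
        return 7
    return 8  # precision 5, 6, or 7 (7 is the default)

def datetime2_as_varbinary_bytes(precision: int) -> int:
    """Casting to varbinary adds one byte to carry the precision with the value."""
    return datetime2_storage_bytes(precision) + 1

# Default precision is 7: 8 bytes in the row, 9 bytes once cast to varbinary.
print(datetime2_storage_bytes(7), datetime2_as_varbinary_bytes(7))  # 8 9
```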
A default precision DATETIME2 will not cause more pressure on PLE compared to DATETIME. The buffer pool consists of 8-KB pages. The page count is what matters as opposed to the internal storage workings of each page. It isn’t really correct to say that 1 million column values will take 8 million or 9 million bytes. Quoting from the documentation:
In SQL Server, a buffer is an 8-KB page in memory, the same
size as a data or index page. Thus, the buffer cache is divided into
8-KB pages. A page remains in the buffer cache until the buffer
manager needs the buffer area to read in more data. Data is written
back to disk only if it is modified. These in-memory modified pages
are known as dirty pages. A page is clean when it is equivalent to its
database image on disk. Data in the buffer cache can be modified
multiple times before being written back to disk.
Also called buffer cache. The buffer pool is a global
resource shared by all databases for their cached data pages. The
maximum and minimum size of the buffer pool cache is determined during
startup or when the instance of SQL Server is dynamically reconfigured
by using sp_configure. This size determines the maximum number of
pages that can be cached in the buffer pool at any time in the running instance.
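Since PLE pressure tracks whole 8-KB pages rather than per-value bytes, a rough back-of-the-envelope comparison makes the point: datetime is 8 bytes per value and default-precision datetime2(7) is also 8 bytes, so they fill the same number of pages. The 8096-byte usable figure (an 8192-byte page minus its 96-byte header) is a simplification that ignores per-row overhead, the slot array, and any other columns.

```python
USABLE_PAGE_BYTES = 8096  # 8192-byte page minus the 96-byte page header (simplified)

def pages_needed(row_count: int, bytes_per_value: int) -> int:
    """Rough count of 8-KB pages needed to hold row_count fixed-size values."""
    values_per_page = USABLE_PAGE_BYTES // bytes_per_value
    return -(-row_count // values_per_page)  # ceiling division

rows = 1_000_000
# datetime and datetime2(7) are both 8 bytes per value, so the page counts
# (and therefore the buffer pool footprint) come out identical.
print(pages_needed(rows, 8))  # 989 pages for either type
```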