Here is a simple statement of the problem.
When a full database backup command is executed:
- Does the backup process use only the memory that has been assigned to the SQL Server instance?
- Does the backup process “flush out” pages (the dirty buffers) from memory (the buffer pool)?
I want to know whether flushing out that data impacts performance, given the resources the backup consumes.
How to solve:
- Does the backup process use only the memory that has been assigned to the SQL Server (instance)?
The memory you assign with the MIN and MAX server memory settings historically applied only to the buffer pool; starting with SQL Server 2012, the memory manager changed and max server memory governs a broader range of allocations.
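You can check what those limits are set to on your instance. A minimal sketch using the standard `sp_configure` procedure (values are in MB):

```sql
-- Show the configured memory limits for this instance.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)';
EXEC sp_configure 'max server memory (MB)';
```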
When a backup starts, it creates a series of buffers allocated from memory outside the buffer pool. The target size is commonly 4 MB per buffer, resulting in approximately four to eight buffers.
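You can also override these defaults explicitly on the BACKUP command. A sketch, assuming a hypothetical database name `MyDb` and backup path:

```sql
-- 8 buffers of 4 MB each => roughly 32 MB of backup buffer memory,
-- allocated outside the buffer pool.
BACKUP DATABASE MyDb
TO DISK = N'C:\Backups\MyDb.bak'
WITH BUFFERCOUNT = 8,
     MAXTRANSFERSIZE = 4194304; -- 4 MB, specified in bytes
```

Raising BUFFERCOUNT and MAXTRANSFERSIZE can speed up backups, but every extra buffer is extra memory taken from the instance, which is exactly why the estimate below matters.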
You can find out how much memory your backup is taking with two trace flags (don’t do this on PROD; it is just for your own learning):
- TRACE FLAG 3605: sends the trace output to the SQL Server error log.
- TRACE FLAG 3213: writes information about backup/restore buffer configuration and throughput.
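A sketch of using them together on a test server (`MyDb` and the path are hypothetical; `sp_readerrorlog`'s search parameter narrows the log output):

```sql
-- Enable both trace flags globally, run a backup,
-- then read the buffer details from the error log.
DBCC TRACEON (3605, 3213, -1);

BACKUP DATABASE MyDb
TO DISK = N'C:\Backups\MyDb.bak';

-- Look in the error log for the backup buffer configuration lines.
EXEC sp_readerrorlog 0, 1, 'buffer';

DBCC TRACEOFF (3605, 3213, -1);
```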
(The original post shows sample error-log output from a test server here.)
- Does the backup process "flush out" the page from the memory (buffer pool), the dirty buffer?
Yes, because it forces a checkpoint to occur. A full backup forces a database checkpoint, which flushes all pages updated in memory to disk before the backup reads anything.
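You can observe this yourself by counting dirty pages before and after the checkpoint. A sketch using the `sys.dm_os_buffer_descriptors` DMV (`MyDb` is a hypothetical database name):

```sql
-- Count dirty (modified-in-memory) pages for one database.
SELECT COUNT(*) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID(N'MyDb')
  AND is_modified = 1;

-- Run CHECKPOINT (or a full backup) in that database, then re-run
-- the query above: the dirty-page count should drop sharply.
```

Note that the checkpoint itself is normal background work the database would do eventually; the backup merely forces it to happen now, so the I/O cost is front-loaded rather than new.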
If you want to estimate the amount of total buffer memory that a particular full database backup to a physical disk would use:
```sql
-- Estimate the amount of total buffer memory that a particular full
-- database backup to a physical disk would use.
-- Reference: http://blogs.msdn.com/b/sqlserverfaq/archive/2010/05/06/incorrect-buffercount-data-transfer-option-can-lead-to-oom-condition.aspx
DECLARE @MaxTransferSize float,
        @BufferCount     bigint,
        @DBName          varchar(255),
        @BackupDevices   bigint;

-- Default value is zero. Value to be provided in MB.
SET @MaxTransferSize = 0;
-- Default value is zero.
SET @BufferCount = 0;
-- Provide the name of the database to be backed up.
SET @DBName = 'entities';  --<<<<< CHANGE HERE YOUR DB NAME !!!
-- Number of disk devices that you are writing the backup to.
SET @BackupDevices = 1;

DECLARE @DatabaseDeviceCount int;

-- Count the distinct drives that hold the database's data files.
SELECT @DatabaseDeviceCount =
       COUNT(DISTINCT SUBSTRING(physical_name, 1, CHARINDEX(':', physical_name) + 1))
FROM sys.master_files
WHERE database_id = DB_ID(@DBName)
  AND type_desc <> 'LOG';

IF @BufferCount = 0
    SET @BufferCount = (@BackupDevices * 3) + @BackupDevices
                       + (2 * @DatabaseDeviceCount);

IF @MaxTransferSize = 0
    SET @MaxTransferSize = 1;  -- default transfer size is 1 MB

SELECT 'Total buffer space (MB): '
       + CAST(@BufferCount * @MaxTransferSize AS varchar(10));
```
Thank you 🙂