Why MERGE doesn't insert more than 277 records into a table which is configured with temporal table and a non-clustered index on history table


I have found another issue with SQL Server and the MERGE statement and need some confirmation.

I can reproduce my issue consistently on an Azure SQL Database (but not on an on-premises SQL Server 2017/2019).

Please execute the following steps one at a time, not in a single command execution!

1) Script for Schema:

    CREATE TABLE [dbo].[ImpactValueHistory]
    (
        [Rn] BIGINT NOT NULL,

        [ImpactId] UNIQUEIDENTIFIER NOT NULL,
        [ImpactValueTypeId] INT NOT NULL,

        [Date] DATE NOT NULL,
        [Value] DECIMAL(38, 10) NOT NULL,

        [ValidFrom] DATETIME2 NOT NULL CONSTRAINT [DF_ImpactValueHistory_ValidFrom] DEFAULT CONVERT(DATETIME2, '0001-01-01'),
        [ValidTo] DATETIME2 NOT NULL CONSTRAINT [DF_ImpactValueHistory_ValidTo] DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),

        [ImpactPeriodId] INT NOT NULL,

        [NormalizedValue] DECIMAL(38, 10) NOT NULL
    )
    GO

    CREATE CLUSTERED COLUMNSTORE INDEX [COLIX_ImpactValueHistory]
        ON [dbo].[ImpactValueHistory];
    GO

    CREATE NONCLUSTERED INDEX [IX_ImpactValueHistory_ValidFrom_ValidTo_ImpactId_DimensionItemId]
        ON [dbo].[ImpactValueHistory] ([ValidFrom], [ValidTo], [ImpactId], [ImpactValueTypeId], [Date]);
    GO


    CREATE TABLE [dbo].[ImpactValue]
    (
        [Rn] BIGINT NOT NULL IDENTITY(1,1),

        [ImpactId] UNIQUEIDENTIFIER NOT NULL,
        [ImpactValueTypeId] INT NOT NULL,

        [Date] DATE NOT NULL,
        [Value] DECIMAL(38, 10) NOT NULL,

        [ValidFrom] DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL CONSTRAINT [DF_ImpactValue_ValidFrom] DEFAULT CONVERT(DATETIME2, '0001-01-01'),
        [ValidTo] DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL CONSTRAINT [DF_ImpactValue_ValidTo] DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),

        [ImpactPeriodId] INT NOT NULL,

        [NormalizedValue] DECIMAL(38, 10) NOT NULL,

        PERIOD FOR SYSTEM_TIME ([ValidFrom], [ValidTo]),

        CONSTRAINT [PK_ImpactValue] PRIMARY KEY NONCLUSTERED ([ImpactId], [ImpactValueTypeId], [Date], [ImpactPeriodId])
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[ImpactValueHistory]))
    GO

    CREATE UNIQUE CLUSTERED INDEX [IX_ImpactValue_Id] ON [dbo].[ImpactValue]([Rn])
    GO

    CREATE COLUMNSTORE INDEX [CIX_ImpactValue] ON [dbo].[ImpactValue] ([ImpactId], [ImpactValueTypeId], [Date], [Value], [NormalizedValue])
    GO
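
As an optional sanity check (my addition, not part of the original repro script), you can confirm that system-versioning is active and that the indexes were created on the history table:

    -- Optional check (my addition): confirm temporal configuration and history-table indexes.
    SELECT t.[name], t.[temporal_type_desc], h.[name] AS [history_table]
    FROM sys.tables AS t
    LEFT JOIN sys.tables AS h ON h.[object_id] = t.[history_table_id]
    WHERE t.[name] = N'ImpactValue';

    SELECT [name], [type_desc]
    FROM sys.indexes
    WHERE [object_id] = OBJECT_ID(N'dbo.ImpactValueHistory');
    GO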

2) Script for inserting some random data

    DECLARE @inserted0 TABLE ([Date] DATE, [ImpactId] uniqueidentifier, [ImpactPeriodId] int, [ImpactValueTypeId] int);
    MERGE [dbo].[ImpactValue] USING (
        SELECT TOP 278 -- <-- this number is critical
            DATEADD(MONTH, ROW_NUMBER() OVER(ORDER BY [Name]) - 1, '2000-01-01') AS [Date],
            NEWID() AS [ImpactId],
            1 AS [ImpactPeriodId],
            1 AS [ImpactValueTypeId],
            99 AS [Value],
            99 AS [NormalizedValue]
        FROM [sys].[all_columns]
    ) AS i ([Date], [ImpactId], [ImpactPeriodId], [ImpactValueTypeId], [Value], [NormalizedValue]) ON 1=0
    WHEN NOT MATCHED THEN
        INSERT ([Date], [ImpactId], [ImpactPeriodId], [ImpactValueTypeId], [Value], [NormalizedValue])
        VALUES (i.[Date], i.[ImpactId], i.[ImpactPeriodId], i.[ImpactValueTypeId], i.[Value], i.[NormalizedValue])
    OUTPUT INSERTED.[Date], INSERTED.[ImpactId], INSERTED.[ImpactPeriodId], INSERTED.[ImpactValueTypeId]
    INTO @inserted0;

    SELECT * FROM @inserted0

This step should return all the inserted rows as its result!

3) Delete the data from step 2)
This step fills up the configured history table.

    DELETE [dbo].[ImpactValue]
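
As a quick sanity check (my addition, not part of the original steps), you can verify that the deleted rows ended up in the history table:

    -- Optional check (my addition): the deleted rows should now sit in the history table.
    SELECT COUNT(*) AS [HistoryRows] FROM [dbo].[ImpactValueHistory]; -- expected: 278 (the rows inserted in step 2)
    SELECT COUNT(*) AS [CurrentRows] FROM [dbo].[ImpactValue];        -- expected: 0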

4) Insert some random data again
You can use the script from step 2)

I have to note that steps 1) – 4) should be executed separately and not all in one GO batch.

Again, this step should return all the inserted rows as its result. But it doesn’t!
On my side I always get an empty result. This is reproducible on three of our production databases 🙁
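
To narrow down what actually happens, a diagnostic I would add (my own addition, not part of the original repro) is to compare what the MERGE itself reports with what OUTPUT INTO captured, by placing two extra SELECTs into the step 2) script directly after the MERGE:

    -- Diagnostic (my addition): place these two statements in the step 2) script,
    -- immediately after the MERGE and before the final SELECT * FROM @inserted0.
    SELECT @@ROWCOUNT AS [RowsAffectedByMerge];       -- rows the MERGE reports as affected
    SELECT COUNT(*) AS [RowsCapturedByOutputInto]
    FROM @inserted0;                                  -- rows that reached the table variable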

The MERGE statement is generated by EF Core, and currently I’m working around this by setting a Max Batch Size. But this can’t be the final solution.

It must have something to do with temporal tables that have a non-clustered index configured on the history table.

See also:

In the past I already stumbled over this issue:

But my current issue is only reproducible on an Azure SQL Database and doesn’t throw any error.

Interesting side notes:

  1. If I temporarily disable the temporal table -> it works
  2. If I delete the non-clustered index [IX_ImpactValueHistory_ValidFrom_ValidTo_ImpactId_DimensionItemId] -> it works
  3. If I use SELECT TOP (@BatchSize) in step 2) -> it works (see the sketch after this list)
  4. If I use only OUTPUT instead of OUTPUT INTO @inserted0 -> it works
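
For side note 3, here is a minimal sketch of the parameterized variant; only the declared @BatchSize and the TOP clause differ from the step 2) script, everything else is unchanged:

    -- Side note 3: parameterized TOP instead of the literal 278.
    DECLARE @BatchSize INT = 278;
    DECLARE @inserted0 TABLE ([Date] DATE, [ImpactId] uniqueidentifier, [ImpactPeriodId] int, [ImpactValueTypeId] int);
    MERGE [dbo].[ImpactValue] USING (
        SELECT TOP (@BatchSize)
            DATEADD(MONTH, ROW_NUMBER() OVER(ORDER BY [Name]) - 1, '2000-01-01') AS [Date],
            NEWID() AS [ImpactId],
            1 AS [ImpactPeriodId],
            1 AS [ImpactValueTypeId],
            99 AS [Value],
            99 AS [NormalizedValue]
        FROM [sys].[all_columns]
    ) AS i ([Date], [ImpactId], [ImpactPeriodId], [ImpactValueTypeId], [Value], [NormalizedValue]) ON 1=0
    WHEN NOT MATCHED THEN
        INSERT ([Date], [ImpactId], [ImpactPeriodId], [ImpactValueTypeId], [Value], [NormalizedValue])
        VALUES (i.[Date], i.[ImpactId], i.[ImpactPeriodId], i.[ImpactValueTypeId], i.[Value], i.[NormalizedValue])
    OUTPUT INSERTED.[Date], INSERTED.[ImpactId], INSERTED.[ImpactPeriodId], INSERTED.[ImpactValueTypeId]
    INTO @inserted0;

    SELECT * FROM @inserted0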

Without the COLUMNSTORE index on the history table it works. If I only remove the COLUMNSTORE index on the main table, I still see the same issue.

Actual execution plans for a case that (a) repros the issue with TOP 278 and (b) doesn’t repro with TOP (@BatchSize) are available at https://1drv.ms/u/s!AsOa6e9ukBWQlIRg9_9eySDFp5hvEA?e=KBQBsP. I have also added the actual execution plans for a batch size of 277. Both variants work with this size!
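
If you want to capture comparable actual plans yourself rather than downloading mine, one generic way (just a suggestion, not something from the original report) is to turn on the runtime plan output for the session before running the step 2) script:

    -- Capture the actual execution plan (returned as XML alongside the results).
    SET STATISTICS XML ON;
    GO
    -- run the step 2) script here
    GO
    SET STATISTICS XML OFF;
    GO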

How to solve:


Method 1

Azure SQL Database sometimes builds an invalid execution plan for your merge-insert.

When it decides to maintain the column store history table using a single operator (a narrow plan) everything is fine. This trivially includes the case where the history table has no secondary indexes.

When it decides to maintain the history table using separate operators for the base table and secondary indexes (a wide plan) things go wrong when using the OUTPUT INTO option. The choice of plan is sensitive to cardinality estimates.

For example, the plan for OUTPUT only (not writing to a table variable) includes a table spool. The spool saves the rows before the Filter that removes any rows from the stream where ValidTo is null. The spool then replays the (unfiltered) rows for return to the client:

[Screenshot: actual execution plan for the OUTPUT-only case, showing the Table Spool before the Filter on ValidTo]

When OUTPUT INTO is used, the same stream is used for both maintaining the secondary index on the history table and providing rows for the output table. This creates a problem because a pure insert results in no rows added to the history, so all rows are filtered out.

[Screenshot: actual execution plan for the OUTPUT INTO case, where the Filter on ValidTo removes all rows before they reach the output table]

A column store index is not required on the history table for this problem to manifest.

This is a product defect you should report directly to Microsoft Support by creating a support request in the Azure Portal.

Side note: rows arriving at the history table Index Insert are not actually inserted because the action column tells it not to. These details aren’t exposed in showplan unfortunately. A possible fix would be to test the action as well as ValidTo in the filter.


The extra Filter does not appear on SQL Server 2019 CU16-GDR:

[Screenshot: actual execution plan on SQL Server 2019 CU16-GDR, without the extra Filter]

This feels like a bug fix for implied nullability problems that has been applied to Azure SQL Database before the box product. If so, it is a little surprising it does not react to QO compatibility level hints.


Daniel C.:

Microsoft confirmed this as a bug and also rolled out a quick fix for one of our affected databases. I can confirm that this fix resolves my issue.


All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.
