What is Log Generation Rate?
Log generation rate refers to the speed at which transaction log is produced by a database. In Hyperscale, log generation is closely monitored and regulated to prevent overloading the system. Azure implements log rate governance to ensure that log generation stays within defined limits, keeping the system stable and performing efficiently.
Log Rate Governance in Hyperscale
By default, Hyperscale databases have a log generation rate limit of 100 MB/s, irrespective of the compute size. When all components are keeping up, log generation can run at this full rate. The cap is designed to ensure that logs are consistently processed and replicated without overwhelming system resources.
However, there may be situations where Azure needs to temporarily reduce the log generation rate. This happens when a secondary replica or page server falls behind in applying the transaction logs. The system will then throttle the log generation rate to allow the lagging components to catch up, ensuring the overall stability of the database.
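Before using the Hyperscale-specific tooling described below, a quick way to see how close a workload is running to the log rate cap is the avg_log_write_percent column of sys.dm_db_resource_stats, which reports log write throughput as a percentage of the governed limit. A minimal sketch, run in the user database:

SELECT TOP (20) end_time, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;

Values consistently near 100 percent indicate the workload is running at the log rate limit.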
When Does Log Generation Rate Get Reduced?
Log generation rate may be reduced for several reasons:
Delayed log consumption by a page server or replica.
A geo-secondary replica might be lagging in applying logs.
Slow database checkpointing could delay log processing on the page server.
Migration to Hyperscale, or reverse migration from Hyperscale to a non-Hyperscale database, can also cause temporary delays in log consumption.
Monitoring Log Generation Rate with sys.dm_hs_database_log_rate
Azure provides the sys.dm_hs_database_log_rate dynamic management function (DMF) to monitor and troubleshoot log generation rates in Hyperscale. This function returns detailed information on which components are limiting the log generation rate, including:
Current log rate limit
Catch-up rate of components (bytes per second)
Component-specific delays and logs that are behind
Key Columns in the DMF:
current_max_log_rate_bps: Maximum log rate limit, in bytes per second.
catchup_rate_bps: The rate at which lagging components are catching up, in bytes per second.
catchup_distance_bytes: The amount of log data that must be processed to catch up.
role_desc: Describes the role of the component affecting log rate, such as a page server, replica, or geo-replica.
This tool helps you quickly identify any components causing delays and allows you to take corrective actions if needed.
How to Check Log Generation Rate in Your Database
To check the log generation rate for a specific database, use the following query:
SELECT current_max_log_rate_bps, role_desc, catchup_rate_bps, catchup_distance_bytes
FROM sys.dm_hs_database_log_rate(DB_ID(N'YourDatabaseName'));
For databases within an elastic pool, you can use NULL to get results for all databases in the pool:
SELECT current_max_log_rate_bps, role_desc, catchup_rate_bps, catchup_distance_bytes
FROM sys.dm_hs_database_log_rate(NULL);
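When a component is lagging, a rough estimate of how long it will take to catch up can be derived from these same columns by dividing the remaining catch-up distance by the catch-up rate. The query below is an illustrative sketch using the column names shown above, with a guard against a zero catch-up rate:

SELECT role_desc,
       catchup_distance_bytes,
       catchup_rate_bps,
       CASE WHEN catchup_rate_bps > 0
            THEN catchup_distance_bytes * 1.0 / catchup_rate_bps
            ELSE NULL
       END AS estimated_catchup_seconds
FROM sys.dm_hs_database_log_rate(DB_ID(N'YourDatabaseName'));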
When the log generation rate is reduced, the following wait types appear in sys.dm_os_wait_stats, indicating the reason for the throttling:
RBIO_RG_STORAGE: Delayed log consumption by a page server
RBIO_RG_DESTAGE: Delayed log consumption by the long-term log storage
RBIO_RG_REPLICA: Delayed log consumption by an HA secondary replica or a named replica
RBIO_RG_GEOREPLICA: Delayed log consumption by a geo-secondary replica
RBIO_RG_LOCALDESTAGE: Delayed log consumption by the log service
RBIO_RG_STORAGE_CHECKPOINT: Delayed log consumption by a page server due to slow database checkpoint
RBIO_RG_MIGRATION_TARGET: Delayed log consumption by the non-Hyperscale database during reverse migration
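To confirm that log rate throttling is actually occurring, you can check how much time has been spent on these waits. A simple sketch against sys.dm_os_wait_stats; keep in mind the counters are cumulative since the last statistics reset, so compare snapshots taken over time:

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'RBIO_RG%'
ORDER BY wait_time_ms DESC;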