Thursday, 19 September 2024

SQL Server – How to get sizes of all databases on a server

To get size information for all databases, we can use the sys.master_files catalog view. This view contains one row per file for every database on the instance.

The columns of interest for retrieving database size information are:


database_id (int): ID of the database to which this file applies. The database_id for the master database is always 1.

file_id (int): ID of the file within the database. The primary file_id is always 1.

file_guid (uniqueidentifier): Unique identifier of the file. NULL = database was upgraded from an earlier version of SQL Server (valid for SQL Server 2005 (9.x) and earlier versions).

type (tinyint): File type:
  0 = Rows
  1 = Log
  2 = FILESTREAM
  3 = Identified for informational purposes only. Not supported. Future compatibility is not guaranteed.
  4 = Full-text (full-text catalogs earlier than SQL Server 2008 (10.0.x); full-text catalogs upgraded to or created in SQL Server 2008 (10.0.x) and later versions report file type 0.)

type_desc (nvarchar(60)): Description of the file type: ROWS, LOG, FILESTREAM, or FULLTEXT (full-text catalogs earlier than SQL Server 2008 (10.0.x)).

data_space_id (int): ID of the data space (filegroup) to which this file belongs. 0 = log files.

name (sysname): Logical name of the file in the database.

physical_name (nvarchar(260)): Operating-system file name.

state (tinyint): File state:
  0 = ONLINE
  1 = RESTORING
  2 = RECOVERING
  3 = RECOVERY_PENDING
  4 = SUSPECT
  5 = Identified for informational purposes only. Not supported. Future compatibility is not guaranteed.
  6 = OFFLINE
  7 = DEFUNCT

state_desc (nvarchar(60)): Description of the file state: ONLINE, RESTORING, RECOVERING, RECOVERY_PENDING, SUSPECT, OFFLINE, DEFUNCT. For more information, see File States.

size (int): Current file size, in 8-KB pages. For a database snapshot, size reflects the maximum space that the snapshot can ever use for the file. Note: this field is populated as zero for FILESTREAM containers; query the sys.database_files catalog view for the actual size of FILESTREAM containers.

max_size (int): Maximum file size, in 8-KB pages:
  -1 = File grows until the disk is full.
  268435456 = Log file grows to a maximum size of 2 TB.
  Note: databases upgraded with an unlimited log file size report -1 for the maximum size of the log file. If max_size = -1 and growth = 0, no growth is allowed.

growth (int): Autogrowth setting:
  0 = File is a fixed size and doesn't grow.
  > 0 = File grows automatically.
  If is_percent_growth = 0, the growth increment is in units of 8-KB pages, rounded to the nearest 64 KB. If is_percent_growth = 1, the growth increment is expressed as a whole-number percentage.

is_media_read_only (bit): 1 = file is on read-only media; 0 = file is on read/write media.

is_read_only (bit): 1 = file is marked read-only; 0 = file is marked read/write.

is_sparse (bit): 1 = file is a sparse file; 0 = file isn't a sparse file. For more information, see View the Size of the Sparse File of a Database Snapshot (Transact-SQL).

is_percent_growth (bit): 1 = growth of the file is a percentage; 0 = absolute growth size in pages.

is_name_reserved (bit): 1 = dropped file name is reusable; a log backup must be taken before the name (name or physical_name) can be reused for a new file. 0 = file name is unavailable for reuse.

create_lsn (numeric(25,0)): Log sequence number (LSN) at which the file was created.

drop_lsn (numeric(25,0)): LSN at which the file was dropped.

read_only_lsn (numeric(25,0)): LSN at which the filegroup that contains the file changed from read/write to read-only (most recent change).

read_write_lsn (numeric(25,0)): LSN at which the filegroup that contains the file changed from read-only to read/write (most recent change).

differential_base_lsn (numeric(25,0)): Base for differential backups. Data extents changed after this LSN are included in a differential backup.

differential_base_guid (uniqueidentifier): Unique identifier of the base backup on which a differential backup is based.

differential_base_time (datetime): Time corresponding to differential_base_lsn.

redo_start_lsn (numeric(25,0)): LSN at which the next roll forward must start. NULL unless state = RESTORING or state = RECOVERY_PENDING.

redo_start_fork_guid (uniqueidentifier): Unique identifier of the recovery fork. The first_fork_guid of the next log backup restored must match this value; this represents the current state of the container.

redo_target_lsn (numeric(25,0)): LSN at which the online roll forward on this file can stop. NULL unless state = RESTORING or state = RECOVERY_PENDING.

redo_target_fork_guid (uniqueidentifier): The recovery fork on which the container can be recovered. Paired with redo_target_lsn.

backup_lsn (numeric(25,0)): LSN of the most recent data or differential backup of the file.

credential_id (int): The credential_id from sys.credentials used for storing the file. For example, when SQL Server is running on an Azure virtual machine and the database files are stored in Azure Blob Storage, a credential is configured with the access credentials for the storage location.
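As an illustration of the growth-related columns described above, a query along these lines (a sketch; only the documented columns are assumed) reports each file's autogrowth and maximum-size settings in readable units:

```sql
-- Sketch: interpret growth / is_percent_growth / max_size per file.
-- growth is in 8-KB pages when is_percent_growth = 0, else a percentage.
SELECT DB_NAME(database_id) AS DBName,
       name                 AS Logical_Name,
       CASE
            WHEN growth = 0            THEN 'Fixed size (no growth)'
            WHEN is_percent_growth = 1 THEN CAST(growth AS varchar(20)) + ' %'
            ELSE CAST(growth * 8 AS varchar(20)) + ' KB'
       END AS Growth_Setting,
       CASE max_size
            WHEN -1 THEN 'Unlimited (until disk is full)'
            ELSE CAST((CAST(max_size AS bigint) * 8) / 1024 AS varchar(20)) + ' MB'
       END AS Max_Size       -- bigint cast avoids overflow for the 2-TB log cap
FROM   sys.master_files;
```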

Using this information, we can retrieve database sizes with the query below:


SELECT DB_NAME(database_id) AS DBName,
       name AS Logical_Name,
       physical_name,
       (size * 8) / 1024 AS SizeMB,
       CASE WHEN type = 0 THEN 'data_file'   -- type 0 (ROWS) covers .mdf and .ndf files
            WHEN type = 1 THEN 'log_file'
            ELSE type_desc
       END AS File_Type
FROM sys.master_files;



SELECT [Database Name] = DB_NAME(database_id),
       [Type] = CASE WHEN type_desc = 'ROWS' THEN 'Data File(s)'
                     WHEN type_desc = 'LOG'  THEN 'Log File(s)'
                     ELSE ISNULL(type_desc, 'Total') END,
       [Size in MB] = CAST(((SUM(size) * 8) / 1024.0) AS DECIMAL(18, 2))
FROM   sys.master_files
-- Uncomment if you need to query a particular database:
-- WHERE database_id = DB_ID('Database Name')
GROUP BY GROUPING SETS
         (
              (DB_NAME(database_id), type_desc),
              (DB_NAME(database_id))
         )
ORDER BY DB_NAME(database_id), type_desc DESC;
GO
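The same page-to-MB arithmetic (size in 8-KB pages × 8 / 1024) can also be rolled up per drive, which is handy when checking disk headroom. This variant is a sketch built on the same view, assuming local drive-letter paths in physical_name:

```sql
-- Sketch: total space consumed by database files, grouped by drive letter.
SELECT LEFT(physical_name, 1) AS Drive,
       CAST(SUM(CAST(size AS bigint)) * 8 / 1024.0 AS DECIMAL(18, 2)) AS [Size in MB]
FROM   sys.master_files
GROUP BY LEFT(physical_name, 1)
ORDER BY Drive;
```

The bigint cast before SUM avoids integer overflow on instances with many terabytes of files.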
