There’s a lot that can be said about the correct ways to split a database across multiple disks, and on a Windows system the advantages are fairly self-evident when you consider that Windows only allows four (4) concurrent read / write operations per LUN. This article isn’t going to cover the why, but rather the how: specifically, ways to redistribute a single database file (MDF file) across multiple database files spread over more than one disk.
The first method is the simplest as it requires very little planning. Within the same database filegroup, add one or more data files, making sure you place these files on separate LUNs (or mount points). Also ensure that these new files are large enough that, when SQL Server considers the greatest percentage of free space, the new files are where the data is redistributed to. If these new files are too small (even with growth configured), you’ll only end up doing an index rebuild within the same database file. Once ready, you only need to rebuild the clustered indexes of your database tables to spread the data across each of the files within that same filegroup.
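A minimal sketch of this first method; the database, file, path, index and table names here are assumptions for illustration, not from the article:

```sql
-- Add a second data file to the existing (PRIMARY) filegroup,
-- placed on a separate disk / mount point.
ALTER DATABASE YourDatabase
ADD FILE
(
    NAME = YourDatabase_Data2,
    FILENAME = 'E:\Data\YourDatabase_Data2.ndf',
    SIZE = 10GB,        -- large enough to attract the redistributed data
    FILEGROWTH = 1GB
)
TO FILEGROUP [PRIMARY];

-- Rebuilding the clustered index rewrites the table's data pages,
-- letting SQL Server spread them across the filegroup's files.
ALTER INDEX PK_YourTableName ON dbo.YourTableName REBUILD;
```

You would repeat the `ALTER INDEX ... REBUILD` step for each table you want redistributed.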
The second method requires a little more planning as it targets individual tables (ideally the larger tables in the DB which carry a fairly high transaction load, such as audit or log tables) and does a full move of that table – rather than a redistribution. This is done by creating a new filegroup, and within that new filegroup adding your data files – remembering to place them onto separate disks / mounts. It’s the new filegroup part of this process which allows us to explicitly define where we want to rebuild the clustered index to, which is how we’d go about moving that database table out of the original database file and into the newly created one. The command you’d use to do this would be as follows:
CREATE UNIQUE CLUSTERED INDEX PK_YourTableName
    ON dbo.YourTableName (YourKeyColumn)
    WITH (DROP_EXISTING = ON)
    ON [NewFilegroup];
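That rebuild assumes the new filegroup and its data files already exist. A sketch of creating them first (all database, filegroup, file and path names are assumptions):

```sql
-- Create the target filegroup.
ALTER DATABASE YourDatabase ADD FILEGROUP [NewFilegroup];

-- Add a data file to it, placed on its own disk / mount point.
ALTER DATABASE YourDatabase
ADD FILE
(
    NAME = NewFilegroup_Data1,
    FILENAME = 'F:\Data\NewFilegroup_Data1.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
)
TO FILEGROUP [NewFilegroup];
```

Repeat the `ADD FILE` step for each additional disk you want the filegroup striped across.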
So the key differences to note here: multiple data files within the same filegroup will simply distribute the data over all of those files using the greatest-percentage-free-space algorithm. By contrast, defining a separate filegroup allows us to specifically move high-transaction-load and/or large DB tables out of the primary filegroup and into their own – thereby shifting that table’s I/O load onto its own separate disk.
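One way (an assumption on my part, not part of the article) to confirm where a table ended up after either method is to query the catalog views for each index’s filegroup; the table name below is illustrative:

```sql
-- List each index on the table alongside the filegroup it is stored in.
SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name                   AS index_name,
    fg.name                  AS filegroup_name
FROM sys.indexes AS i
JOIN sys.filegroups AS fg
    ON i.data_space_id = fg.data_space_id
WHERE i.object_id = OBJECT_ID('dbo.YourTableName');
```

After the second method, the clustered index (and therefore the table’s data) should report the new filegroup rather than PRIMARY.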