Is there any purpose to Ola Hallengren's SQL Server Index and Statistics Maintenance scripts on a SQL Server running on SAN storage?
I have configured Ola Hallengren's backup and integrity check scripts.
However, I would like advice on whether it is necessary to set up the SQL Server Index and Statistics Maintenance scripts. My SQL Server runs on SAN infrastructure, and I have read that there is no benefit to setting the fill factor to less than 100%, or to performing index reorganizes or rebuilds, when the data files are on SAN storage. Any advice will be appreciated.
Statistics maintenance is done automatically by SQL Server, so I don't see any benefit in configuring Ola's scripts for statistics maintenance.
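If you want to confirm that automatic statistics maintenance is actually enabled for a database, you can check the database-level settings. A minimal sketch (the database name is a placeholder):

```sql
-- Check whether SQL Server will create and update statistics automatically.
-- is_auto_update_stats_async_on shows whether updates happen asynchronously.
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'YourDatabase';  -- hypothetical database name
```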
Note that SAN storage means multiple disks are involved, and the data (whether the table is a heap with no index, has a clustered index, or has non-clustered indexes) is scattered across multiple disks.
It is true that index fragmentation is largely a thing of the past if your database is architected properly, making index maintenance mostly moot these days, especially on modern hardware. One example where I have seen index maintenance prove necessary was a third-party application's database whose developers didn't believe in clustered indexes: all of their tables were heaps, which would constantly become over 90% fragmented and perform poorly. But that is why I added the disclaimer "if your database is architected properly" to my first sentence. (Note this may have been on mechanical storage at the time, which could have exposed the problem of a fully fragmented heap even more.)
Statistics maintenance is a different objective and is still relevant today. Up-to-date statistics lead to better cardinality estimates when the SQL Engine is choosing an execution plan for your query. Poor cardinality estimates (generally when they're off by an order of magnitude or more) can result in very poor performance, regardless of what kind of hardware your server runs on, because of the improper decisions they cause in the execution plan.
For example, when the Engine underestimates the number of rows a query will return, it may request too small a memory grant, starving the execution plan of adequate resources. Conversely, an overestimate can over-provision resources for your query's execution plan, which takes time to allocate and then hogs those resources from the rest of the server. If I recall correctly, a single query can take up to 25% of the instance's allocated memory.
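To see memory grants in flight, and spot queries that asked for far more (or less) than they ideally needed, one option is the `sys.dm_exec_query_memory_grants` DMV. A sketch:

```sql
-- Compare what each currently running query requested vs. what it was granted.
-- A large gap between ideal_memory_kb and granted_memory_kb can point at
-- cardinality-estimate problems in the plan.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       ideal_memory_kb,
       query_cost
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;
```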
Another performance bottleneck that can result from poor cardinality estimates is the wrong operator being chosen in the execution plan, particularly when data is being joined. For example, an underestimate may cause the Nested Loops join operator to be chosen, because it is usually more performant for smaller data sets, when a Hash or Merge join would actually have been more performant had the cardinality estimates been accurate.
Statistics aren't the be-all and end-all for guaranteeing good cardinality estimates, but they are one factor that can help (or, when they are outdated, hurt) those estimates. Updating statistics at a frequency that makes sense relative to how large your data is and how frequently those tables change is key. Updating statistics is not a blocking operation, but it can consume a decent amount of server resources while it runs.
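One way to judge how stale a table's statistics are relative to its data churn is `sys.dm_db_stats_properties`, which exposes the row-modification counter since the last update. A sketch, with the table name as a placeholder:

```sql
-- For each statistic on a table, show when it was last updated and how many
-- rows have been modified since then.
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.YourTable');  -- hypothetical table
```

If the modification counter is high relative to the row count, a manual `UPDATE STATISTICS dbo.YourTable;` (optionally `WITH FULLSCAN`) may be warranted sooner than the automatic threshold would trigger it.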
Regarding Ola Hallengren's scripts, I'm not an expert on them as I don't personally use them (yet), but my understanding is that they provide better granularity and configurability, letting you schedule maintenance more appropriately for your use cases. They're also reputed to be more reliable than the out-of-the-box Maintenance Plans. For these reasons, I believe they are the more popular choice for maintenance.
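As a concrete illustration of that configurability, my understanding from the documentation (verify the parameters against the current version on ola.hallengren.com) is that `IndexOptimize` can be told to skip index maintenance entirely and only refresh statistics that have actually changed:

```sql
-- Statistics-only maintenance: disable all index actions by nulling the
-- fragmentation tiers, and update only statistics with row modifications
-- since their last update.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';
```

That kind of split, statistics on one schedule and index maintenance (if any) on another, is exactly the granularity the built-in Maintenance Plans make awkward.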