
In this example there is not much to do. You want to show 8,000 items; no matter what you do, you will have to print 8,000 items, and that is the dominant cost. Perhaps with a different structure, one you do not have, you could get a minimal gain by accessing the data flatly, but it would not be worth it, and PHP is not a language for that kind of micro-optimization. The gain would be marginal. This is linear, O(n), complexity no matter what you do.

The complexity seems worse because of the apparent multiplication, but in fact the structure holds all of the items; the dimensions are only partitions of the whole, and complexity has to be measured over the whole.

If you had to pick one item out of those 8,000, then there would be something to optimize, possibly even down to constant complexity, O(1). But you might need a different structure for that, which is not something PHP is good at. Depending on the requirements and the guarantees of the structure, if it does not give you O(1) it could give you logarithmic complexity, O(log n).

O(1) and O(log n) are great for large volumes (for small ones they can even be worse in practice), but many algorithms can only be done in O(n) anyway. Anything that requires access to every item will have at least linear complexity, no matter what you do. Unless, of course, the data can be obtained procedurally, in which case you are no longer accessing data from a structure but computing the result from a formula. Even if that were possible here, the algorithm in the question would still be linear.

If you have 1,000 × 1,000 × 1,000 × 1,000 items, you have 1 trillion items; in an algorithm that needs all of them there is nothing you can do — computers and mathematics do not work miracles.

A quadratic algorithm, O(n²), is usually bad, but nothing unmanageable up to a certain number of items (thousands or even millions). It will be bad, but you can work with it.
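To make the point concrete, here is a small PHP sketch (the dataset and its dimensions are hypothetical) showing that nested loops over a two-dimensional array, a single loop over the flattened version, and a direct keyed access all behave as described above:

```php
<?php
// Hypothetical dataset: 80 "rows" of 100 items each — 8000 items in total.
$grid = [];
for ($i = 0; $i < 80; $i++) {
    for ($j = 0; $j < 100; $j++) {
        $grid[$i][$j] = "item-$i-$j";
    }
}

// Nested loops: the apparent "multiplication" is just 80 * 100 = 8000 visits.
$visits = 0;
foreach ($grid as $row) {
    foreach ($row as $item) {
        $visits++;
    }
}
echo $visits, "\n"; // 8000 — linear in the total number of items

// Flattened structure: one loop instead of two, but still exactly
// 8000 items to visit — the complexity is the same O(n).
$flat = array_merge(...$grid);
echo count($flat), "\n"; // 8000

// Picking ONE item by key is a different problem: PHP arrays are hash
// tables internally, so a keyed access is O(1) — no traversal at all.
echo $grid[42][7], "\n"; // item-42-7
```

The nested and flattened traversals do identical work; only the loop bookkeeping differs, which is why flattening can give at most a marginal constant-factor gain.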
There is a lot of study on how to avoid getting there, because "many" problems seem to require it. The real trouble starts with exponential complexity, O(2ⁿ), where just 100 items already produce an algorithm that needs to perform 1,267,650,600,228,229,401,496,703,205,376 (more than 1 nonillion) operations. That is brutal. For reference, a processor performs fewer than about 1 billion very simple operations per second, so on anything realistic a computer would take years or centuries to finish. With 1,000 items you could not even conceive of the number.

Only factorial complexity, O(n!), is worse, where the cost grows by a larger factor at every step. What starts out calmer quickly becomes unviable: 100 items would give 933262154439441526816992388562667004907159682643816214685929638952175993229915608941463976156518286253697920827223758251185210916864000000000000000000000000000000000000 operations.

Beyond the complexities that should be avoided in every possible way, you can better see the differences between what actually happens in https://pt.stackoverflow.com/a/56868/101, where I took the images from.

He probably believes that nested loops are a problem because he believes in "good practices", that is, he thinks there are rules that, if always followed, solve all problems. In fact, everything depends on context.

You can even slightly improve performance by making the structure flat (one dimension), because there would be a single loop. This will not reduce the complexity of the algorithm, but it will bring a minimal gain. Optimizations should not be made for minimal gains, except in very specific cases. Gains from reducing complexity are much more important, and in this case there is no way to reduce complexity, not even by accepting some other disadvantage — unless there is something in the dataset that allows a specific optimization, which is very rare.
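A quick sketch of how fast these classes diverge (the `ops` helper below is hypothetical; it just tallies the implied operation counts for a given n):

```php
<?php
// Compare how many operations each complexity class implies for input
// size $n. The values are illustrative: real algorithms have constant
// factors, but the growth rates are what matters.
function ops(int $n): array {
    $factorial = 1;
    for ($i = 2; $i <= $n; $i++) {
        $factorial *= $i; // n! — overflows 64-bit integers past n = 20
    }
    return [
        'O(n)'   => $n,
        'O(n^2)' => $n ** 2,
        'O(2^n)' => 2 ** $n,
        'O(n!)'  => $factorial,
    ];
}

foreach ([10, 20] as $n) {
    echo "n = $n\n";
    foreach (ops($n) as $class => $count) {
        printf("  %-7s %d\n", $class, $count);
    }
}
```

Already at n = 20 the exponential case passes one million operations and the factorial case passes two quintillion, while the linear and quadratic cases remain trivial — which is why only the exponential and factorial classes are considered unworkable here.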