Obviously, you can't test everything. The question then is: how do you test as many of the most important things as possible in the time you have available? That's a big question as written, and I'm only going to be able to cover it at a high level. Apologies if I'm not answering at the level you wanted, or if this answer is too basic.
First of all, try to come up with equivalence classes. There should be some broad categories of error types that you can test for specifically: wrapping around the edge, resizing the screen, large numbers of blocks, etc. Do a rough sort of these equivalence classes in priority order, and start writing automated tests for them from highest priority to lowest. Prioritize based on the reduction in risk you expect to achieve for the time it takes to automate each test. Deprioritize tests that are hard to automate, or put them on a list of manual tests. These automated tests will become your regression test suite.
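To make the prioritization concrete, here is a minimal sketch of ranking equivalence classes by expected risk reduction per hour of automation effort. The class names come from the examples above; the numbers are purely illustrative estimates you would replace with your own.

```python
# Rank equivalence classes by risk reduction per hour of automation effort.
# The scores below are made-up estimates for illustration only.
from dataclasses import dataclass

@dataclass
class EquivalenceClass:
    name: str
    risk_reduction: float    # rough estimate, e.g. on a 1-10 scale
    automation_hours: float  # rough estimate of effort to automate

    @property
    def score(self) -> float:
        return self.risk_reduction / self.automation_hours

classes = [
    EquivalenceClass("wrap around the edge", 8, 2),
    EquivalenceClass("resize the screen", 5, 4),
    EquivalenceClass("large numbers of blocks", 7, 10),
]

# Highest score first: automate these first. Very low scorers may be
# better left on a manual-test checklist.
prioritized = sorted(classes, key=lambda c: c.score, reverse=True)
for c in prioritized:
    print(f"{c.name}: {c.score:.2f}")
```

Even rough numbers like these make the trade-offs explicit and give you something concrete to revisit as estimates improve.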
You may find that there are some cases where you have lots of parameters and cannot test all the combinations; in these cases, you may have to do some research into combinatorial testing to work out the proper way to get the best coverage with the time and resources you have.
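As a taste of what combinatorial testing buys you, here is a small sketch of greedy pairwise ("all-pairs") selection: instead of running every combination, pick cases so that every pair of parameter values appears in at least one case. The parameters and values are hypothetical stand-ins, and real tools use more sophisticated algorithms than this greedy loop.

```python
# Greedy pairwise (all-pairs) test selection sketch.
# Parameters and values are illustrative placeholders.
from itertools import combinations, product

params = {
    "os": ["windows", "mac", "linux"],
    "resolution": ["800x600", "1920x1080"],
    "speed": ["slow", "fast"],
}
names = list(params)
values = [params[n] for n in names]

def case_pairs(case):
    # All (parameter, value) pairs a single test case covers.
    return {((names[i], case[i]), (names[j], case[j]))
            for i, j in combinations(range(len(names)), 2)}

# Every two-parameter value pair that must appear in at least one case.
all_cases = list(product(*values))
needed = set()
for case in all_cases:
    needed |= case_pairs(case)

suite, uncovered = [], set(needed)
while uncovered:
    # Greedily take the case that covers the most still-uncovered pairs.
    best = max(all_cases, key=lambda c: len(case_pairs(c) & uncovered))
    suite.append(best)
    uncovered -= case_pairs(best)

print(f"{len(suite)} pairwise cases vs {len(all_cases)} exhaustive")
```

Here the pairwise suite is half the size of the exhaustive one; with more parameters the savings grow dramatically, which is why this technique matters when combinations explode.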
Work with the developers (who absolutely need to be writing unit tests; you won't have time to do all the unit tests as well unless your dev/test ratio is less than 1) to ensure that tests are being written at the proper level. Many things can be covered by unit tests with mocks and may not need additional coverage in integration tests.
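As a sketch of what "covered by unit tests with mocks" looks like, here is a business rule tested entirely at the unit level using the standard library's `unittest.mock`. The `Scoreboard` and storage names are hypothetical, not from your system.

```python
# Unit-testing a business rule against a mocked dependency, so no real
# storage (or UI) is needed. Class and method names are hypothetical.
from unittest.mock import Mock

class Scoreboard:
    def __init__(self, storage):
        self.storage = storage

    def record(self, score):
        # The rule worth unit testing: only persist new high scores.
        if score > self.storage.high_score():
            self.storage.save(score)
            return True
        return False

storage = Mock()
storage.high_score.return_value = 100

assert Scoreboard(storage).record(150) is True
storage.save.assert_called_once_with(150)   # persisted exactly once
assert Scoreboard(storage).record(50) is False
```

A rule like this, verified here, doesn't need to be re-verified through the UI; an integration test only needs to confirm the pieces are wired together.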
Test as much as you can without the UI; UI tests are the most expensive to maintain. Anything you can test at the API level or lower frees up more time for exploratory testing (more on that below). Make sure your UI-level tests save screenshots on failure, so you can see what the error looked like.
In addition, plan on investing in manual exploratory testing. There are many approaches to exploratory testing; one good starting point is to take the equivalence classes you haven't automated yet and test around them manually, keeping careful notes. You will probably discover additional equivalence classes and tests along the way; either perform them as you think of them, or capture them somewhere so you can add them to your prioritized list later. When you find bugs, make sure you add automated regression tests for those bugs, or create a work item/JIRA ticket/etc. to track the missing test.
Exploratory testing is something of an art, and there are many techniques that people use to be more successful. James Bach and James Whittaker are both good resources, IME. James Bach has a good introduction to exploratory testing, and James Whittaker's exploratory testing tours may also be helpful. These "tours" are different ways to look at your software to uncover different classes of bugs.
If you have the time and resources, I would investigate whether you can create some automated exploratory tests. These are tests that generate scenarios according to some algorithm, usually involving some randomness but also some logic based on what you think would be interesting to examine, and then run a series of checks to determine whether the generated scenario might contain a bug (Do the logs have exceptions? Are elements overlapping on the screen that shouldn't be? Is too much memory being used? Did you get a stack overflow? Etc.). You can probably come up with good checks based on bugs you find through normal regression testing and manual exploratory testing. If a test might be hitting a bug, it should capture any useful information - screenshots, exception stack traces, etc. - and save it somewhere for you to go over the next day and triage for real bugs.
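The loop described above can be sketched in a few lines. Everything here is a placeholder: `run_scenario` stands in for driving your real application, and the checks and thresholds are invented for illustration.

```python
# Skeleton of an automated exploratory run: generate seeded scenarios,
# run checks, and save a triage report for suspicious runs.
# run_scenario and the checks are placeholders for your real system.
import json
import random

def run_scenario(seed):
    # Placeholder: drive the app with `seed` and return observations.
    rng = random.Random(seed)
    return {
        "seed": seed,
        "log_exceptions": rng.random() < 0.05,  # pretend 5% of runs log one
        "memory_mb": rng.uniform(50, 400),
    }

def checks(result):
    issues = []
    if result["log_exceptions"]:
        issues.append("exception found in logs")
    if result["memory_mb"] > 350:
        issues.append(f"high memory use: {result['memory_mb']:.0f} MB")
    return issues

def overnight_run(n_scenarios, report_path):
    suspects = []
    for seed in range(n_scenarios):
        issues = checks(run_scenario(seed))
        if issues:
            # A real harness would also save screenshots, stack traces,
            # and enough state to reproduce the run from the seed.
            suspects.append({"seed": seed, "issues": issues})
    with open(report_path, "w") as f:
        json.dump(suspects, f, indent=2)
    return suspects

suspects = overnight_run(200, "triage_report.json")
```

Seeding each scenario is the important design choice: any suspicious run can be replayed exactly from its seed during triage.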
Automated exploratory tests can run overnight and cover far more "turf" than you could manually, but they will miss some bugs that would be obvious to the human eye. At the same time, they may also catch bugs that aren't always visible but could cause problems under some circumstances (especially memory issues and the like, which can be tedious to test by hand). Good design is critical to making these tests as useful as they can be: you may want several different algorithms for generating scenarios, with configuration to control how often each one is used, so your automated exploratory tests can mature along with the code base. Note that this style of testing is very important for a high level of polish, but it is also time- and resource-intensive. You simply may not have time to engage in it.
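The "configuration to control how often each algorithm runs" idea can be as simple as weighted random selection. The generator names and weights below are illustrative; in practice they would live in a config file you tune as the code base matures.

```python
# Pick a scenario-generation algorithm according to configured weights.
# Generator names and weights are illustrative placeholders.
import random

config = {"random_walk": 0.6, "boundary_hammer": 0.3, "resize_storm": 0.1}

def pick_generator(rng):
    names = list(config)
    weights = [config[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded so a night's run is reproducible
counts = {name: 0 for name in config}
for _ in range(1000):
    counts[pick_generator(rng)] += 1
```

Shifting the weights, or adding a new generator, then changes the overnight mix without touching the harness itself.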