MathBoard001
Hello to everyone!

I have a problem that is simple to state but perhaps complex in practice.

Suppose we have 300 functions \(\displaystyle y = f(x)\) with arbitrary, random-looking trends, and we want to group them into 10 subgroups (each group consisting of 30 functions).

However, this grouping must not be random: the sum of the areas subtended by the functions in each group should be as small as possible, so that there is a sort of balance between the 10 subgroups (preferably the best possible balance).

In your opinion, which iterative method would be preferable for a problem like this?

Comparing the functions one by one is definitely unthinkable, so I would like a robust algorithm that can build good (not necessarily perfect) subgroups programmatically.

Do you have any suggestions?


**The ideal solution is to have \(\displaystyle Area = 0\) for each subgroup!**
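For what it's worth, once each function is reduced to a single signed number by numerical integration, this becomes a balanced multiway partitioning problem, and a simple greedy heuristic often gets close to zero in every group. Below is a minimal, self-contained Python sketch of that idea; the random cubic polynomials are only hypothetical stand-ins for the actual 300 functions, and names like `signed_area` and `balanced_partition` are my own illustrations, not anything from the thread.

```python
import random

random.seed(0)

# Hypothetical stand-ins for the 300 functions: cubic polynomials with
# random coefficients in [-1, 1], considered on the interval [0, 1].
def make_poly(coeffs):
    return lambda x: sum(c * x**k for k, c in enumerate(coeffs))

funcs = [make_poly([random.uniform(-1, 1) for _ in range(4)]) for _ in range(300)]

def signed_area(f, a=0.0, b=1.0, n=1000):
    """Trapezoidal-rule estimate of the signed area under f on [a, b]."""
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

areas = [signed_area(f) for f in funcs]

def balanced_partition(areas, n_groups=10, group_size=30):
    """Greedy heuristic: place items in order of decreasing |area|,
    each into the non-full group whose running sum moves closest to zero."""
    order = sorted(range(len(areas)), key=lambda i: -abs(areas[i]))
    groups = [[] for _ in range(n_groups)]
    sums = [0.0] * n_groups
    for i in order:
        open_groups = [g for g in range(n_groups) if len(groups[g]) < group_size]
        best = min(open_groups, key=lambda g: abs(sums[g] + areas[i]))
        groups[best].append(i)
        sums[best] += areas[i]
    return groups, sums

groups, sums = balanced_partition(areas)
print([len(g) for g in groups])    # every group holds exactly 30 functions
print(max(abs(s) for s in sums))   # largest |group total|; near 0 is good
```

Placing the large-|area| items first matters: they are the hardest to compensate for, and the many small items assigned later can fine-tune each group's total toward zero.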

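Since the question asks for an iterative method: one natural candidate is hill climbing by pairwise swaps. Start from any grouping with the right group sizes, then repeatedly pick one item from each of two groups and swap them whenever the swap reduces the larger of the two |group sums|. The sketch below assumes each function has already been reduced to a signed area; the function name and parameters are illustrative, not from the thread.

```python
import random

def improve_by_swaps(groups, areas, n_iter=20000, seed=1):
    """Hill-climbing refinement: repeatedly try swapping one item between
    two random groups, keeping the swap only if it shrinks the larger of
    the two |group sums|. Group sizes never change."""
    rng = random.Random(seed)
    sums = [sum(areas[i] for i in g) for g in groups]
    for _ in range(n_iter):
        ga, gb = rng.sample(range(len(groups)), 2)
        ia = rng.randrange(len(groups[ga]))
        ib = rng.randrange(len(groups[gb]))
        a, b = groups[ga][ia], groups[gb][ib]
        delta = areas[b] - areas[a]          # change to group ga's sum
        new_a, new_b = sums[ga] + delta, sums[gb] - delta
        if max(abs(new_a), abs(new_b)) < max(abs(sums[ga]), abs(sums[gb])):
            groups[ga][ia], groups[gb][ib] = b, a
            sums[ga], sums[gb] = new_a, new_b
    return groups, sums
```

Accepting only improving swaps makes the worst |group sum| monotonically non-increasing; if it stalls in a local optimum, the same move set drops straight into a simulated-annealing loop by occasionally accepting worsening swaps.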

*P.S. Sorry for my English, it may not be perfect!*