"They worked from the premise that the most creative art was that which broke most from the past, and then inspired the greatest visual shifts in the works that followed."
"Their experiment—which involved two datasets totalling more than 62,000 paintings—was entirely automated. They gave the computer no information about art history."
I don't get it. They started from a reasonable definition of what makes creative art, built a program to recognize it (one that at the same time couldn't possibly do so), and then, when they got the results they wanted, concluded it meant something? I'm going to assume this is just bad science writing and their paper is actually interesting.
That part doesn't make sense, indeed, and it's in the original paper too. What I found interesting is that they could code an algorithm to evaluate art at all, which is pretty important, and I'm sure many companies will find that capability valuable.
Perhaps with any sufficiently large data set spanning the periods, the algorithm can infer temporal relatedness from spatiochromatic relatedness.
The program rewards art which 1) breaks with the past and 2) inspires imitators. It seems to me that the imitators determine what should be valued as creative, and the program simply tallies the results of this 'poll'. It's interesting that you can get a computer to do it, but people would already know the results.
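For what it's worth, that two-part criterion is easy to sketch. This is not the paper's actual method, just a toy illustration under my own assumptions: each painting is a feature vector (say, some color/composition embedding), and a painting's score is its dissimilarity from earlier works (originality) plus how similar later works are to it (influence).

```python
import numpy as np

def creativity_scores(features, years):
    """Toy creativity score per painting: originality
    (1 - mean similarity to earlier works) plus influence
    (mean similarity of later works to this one).
    Hypothetical sketch, not the paper's algorithm."""
    features = np.asarray(features, dtype=float)
    years = np.asarray(years)
    # Cosine similarity between every pair of paintings.
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = unit @ unit.T
    scores = []
    for i in range(len(years)):
        earlier = years < years[i]
        later = years > years[i]
        originality = 1.0 - sim[i, earlier].mean() if earlier.any() else 0.0
        influence = sim[i, later].mean() if later.any() else 0.0
        scores.append(originality + influence)
    return np.array(scores)
```

On a tiny made-up dataset where one painting departs from its predecessors and is then copied, that painting gets the top score, which is exactly the 'poll of imitators' effect: the score is driven by how many later works resemble it.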