# Rendering the Feigenbaum tree

by computerex
Tags: feigenbaum, rendering, tree
P: 68

Hello. I am attempting to render the bifurcation diagram of the logistic difference equation in C with OpenGL. I think the math behind it is simple enough, but for some reason I am ending up with a structure that resembles the tree but doesn't look quite like it. I would really appreciate it if someone could point out where I am going wrong. Here is how it should look:
http://brain.cc.kogakuin.ac.jp/~kana...aos/e/BifArea/
http://planetmath.org/encyclopedia/F...umFractal.html
The following program produces the output attached to this post. As you can see, the overall structure seems to be correct.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <GL/glu.h>
#include <GL/glut.h>

#define NUM_OF_ITT 100.0 // number of itterations
#define NUM_OF_SETS 1000 // number of ks that will be tested

double xstart = 0.1; // x0
double dataPoints[NUM_OF_SETS][(int)NUM_OF_ITT][2];

void calculatePlot(double k, unsigned int inx)
{
    double result = k*xstart*(1.0-xstart);
    dataPoints[inx][0][0]=k;
    dataPoints[inx][0][1]=result;
    for ( unsigned int i = 1; i < NUM_OF_ITT; i++ )
    {
        result = k * result*(1.0-result);
        dataPoints[inx][i][0]=k;
        dataPoints[inx][i][1]=result;
    }
}

void handleKeyPress(unsigned char key, int x, int y)
{
    switch (key)
    {
    case 27:
        exit(0);
    }
}

void initRendering()
{
    glEnable(GL_DEPTH_TEST);
    glLineWidth(1.0);
    double k = 1;
    double kstep = (4.0-k)/(double)NUM_OF_SETS;
    for ( unsigned int i = 0; i < NUM_OF_SETS; i++ )
    {
        calculatePlot(k, i);
        k+=kstep;
    }
}

void handleResize(int q, int z)
{
    glViewport(0, 0, q, z);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)q / (double)z, 1.0, 200.0);
}

void drawScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-3, 0, -5);
    glColor3f(0.0, 1.0, 0.0);

    // not rendering the transient states
    glBegin(GL_LINE_STRIP);
    for ( unsigned int set = 0; set < NUM_OF_SETS; set++ )
    {
        glVertex2f(dataPoints[set][0][0], dataPoints[set][0][1]);
    }
    glEnd();

    glBegin(GL_LINE_STRIP);
    for ( unsigned int set = 0; set < NUM_OF_SETS; set++ )
    {
        glVertex2f(dataPoints[set][(int)NUM_OF_ITT-1][0], dataPoints[set][(int)NUM_OF_ITT-1][1]);
    }
    glEnd();
    glutSwapBuffers();
}

void update(int value)
{
    glutTimerFunc(10, update, 0);
}

void idlefunc()
{
    glutPostRedisplay();
}

void mouse(int button, int state, int x, int y) { }

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(400, 400);
    glutCreateWindow("Logistics");
    initRendering();
    glutDisplayFunc(drawScene);
    glutIdleFunc(idlefunc);
    glutKeyboardFunc(handleKeyPress);
    glutReshapeFunc(handleResize);
    glutMouseFunc(mouse);
    glutTimerFunc(100, update, 0);
    glutMainLoop();
    return 0;
}
Engineering Sci Advisor HW Helper Thanks P: 6,348

I think you are only displaying the data point for the last iteration. Print out, say, the last few iteration values in the dataPoints array in each call to calculatePlot(). When k is below 3, you will find they are all the same. Then they start to flip between 2 different values. Then they go round a cycle of 4 different values, then 8, etc. Later on, they are chaotic. Later still they will repeat in cycles with other lengths, like 6 and 3.

You need to plot all the different values to make the complete graphic. One way is to calculate say 1000 iterations, to wash out the effect of your choice of a starting value, and then plot say the next 100 iterations. Many of the 100 plotted points will lie on top of each other, but that doesn't matter.
P: 68
 Quote by AlephZero I think you are only displaying the data point for the last iteration.
Hello. Thank you for the reply.
Take a look at this:

double dataPoints[NUM_OF_SETS][(int)NUM_OF_ITT][2];
This multidimensional array contains the x-y coordinate data for each k that is going to be tested. NUM_OF_SETS different k's will be tested, and the difference between one k and the next is 3/NUM_OF_SETS (k runs from 1 toward 4). Here's the routine that precomputes the data for better rendering times. I think the problem may lie here:

void calculatePlot(double k, unsigned int inx)
{
    double result = k*xstart*(1.0-xstart);
    dataPoints[inx][0][0]=k;
    dataPoints[inx][0][1]=result;
    for ( unsigned int i = 1; i < NUM_OF_ITT; i++ )
    {
        result = k * result*(1.0-result);
        dataPoints[inx][i][0]=k;
        dataPoints[inx][i][1]=result;
    }
}
0 <= inx < NUM_OF_SETS. Note how the x coordinate is always set to the k being tested. I am not sure whether this is the right way of doing it. Some implementations plot (g(x), x) where g(x) = X(n+1) = k*Xn*(1-Xn). Others simply use the k being tested as the x value. Still others use (x, g(x)).

This is the rendering procedure, which is relatively simple. Note that right now, for each k being tested, the x value of every point is k itself; only y varies. So you would expect to see a vertical line for each k, with the points between dataPoints[set][0] and dataPoints[set][NUM_OF_ITT-1] being transient states. Those points are therefore not rendered here. The data is still available, though, and can be rendered; originally it was, but I saw only vertical columns connecting the top outline and the bottom outline, so I decided to omit those points from rendering.

The mechanism of rendering is simple: We loop through all the sets (the data for all the k's tested) and render a line strip using the first point calculated. Then the same procedure is repeated, rendering the last data point in the array for each set.

// not rendering the transient states
glBegin(GL_LINE_STRIP);
for ( unsigned int set = 0; set < NUM_OF_SETS; set++ )
{
    glVertex2f(dataPoints[set][0][0], dataPoints[set][0][1]);
}
glEnd();

glBegin(GL_LINE_STRIP);
for ( unsigned int set = 0; set < NUM_OF_SETS; set++ )
{
    glVertex2f(dataPoints[set][(int)NUM_OF_ITT-1][0], dataPoints[set][(int)NUM_OF_ITT-1][1]);
}
glEnd();
I don't know what I am doing wrong. I am sorry, but I really have no background in dynamical systems (I'm a high school senior); I just love math and realtime computer graphics. Further help would be much appreciated.

Mentor
P: 21,009

Minor points
Why is NUM_OF_ITT defined as a floating point value?
#define NUM_OF_ITT  100.0 // number of itterations
This doesn't make sense, especially since you have to cast it to int when you declare your dataPoints array.

BTW, iteration doesn't have any double t's in it, which makes NUM_OF_ITT a bit more difficult to understand than it needs to be.
P: 68
 Quote by Mark44 Minor points Why is NUM_OF_ITT defined as a floating point value? #define NUM_OF_ITT 100.0 // number of itterations This doesn't make sense, especially since you have to cast it to int when you declare your dataPoints array. BTW, iteration doesn't have any double t's in it, which makes NUM_OF_ITT a bit more difficult to understand than it needs to be.
I thought I'd have to do some arithmetic later on that would involve casting the constant to a floating point, and I was always bad at spelling ;)

In any case, you can rest assured that neither of those minor programming inconveniences is the cause of the issue. Do you have any other ideas? :)
Mentor P: 21,009

I wasn't able to open the first link you posted. I was able to open the planetmath.org link, however. Another (and clearer, IMO) explanation that seems closer to what you're trying to do is here: http://mathforum.org/advanced/robertd/bifurcation.html

The bifurcation graph represents the results of repeatedly calculating $\mu$x(1 - x) for values of $\mu$ and x. (You're using k for $\mu$ and y for x.) It works by picking a value of $\mu$ and a starting value of x. The next value of x is obtained by substituting the current one into the expression $\mu$x(1 - x). For some values of $\mu$ the iterated values converge to a specific number. For example, if $\mu$ is 2 and we start with x = .25 and calculate 2*.25*.75, the next x value will be .375. Putting this value in for x, the next x value will be .46875. After a few steps you get values that keep repeating: .5, .5, .5. So one point on the graph you're trying to get will be (2, .5). Similarly, when $\mu$ is 2.5, the iterations settle down to .6, so another point on the graph is (2.5, .6).

For values of $\mu$ from 1 to about 3, iteration produces a single value. For larger values of $\mu$, iteration can produce two values, then four values, then eight values, and so on, and that's what gives you the bifurcations, or forks, in the diagram. When $\mu$ is around 3.5, you start to get cycles of four different iterated values, so the bifurcation graph will show four different points for a while.

Now maybe you know this, or maybe not. Your code doesn't contain many comments, so it's hard to tell what you're doing, particularly with your three-dimensional array. It would be nice to have a comment saying what each index in that array represents.
P: 68

I am sorry :( I know it can be a nightmare trying to interpret undocumented code, so I will try to explain to the best of my abilities what I am doing in the code:

#define NUM_OF_IT 100 // number of iterations
#define NUM_OF_SETS 1000 // number of ks that will be tested

double xstart = 0.1; // x0
double dataPoints[NUM_OF_SETS][(int)NUM_OF_IT][2];

The basic algorithm I am following is:
1) Pick some starting x value below 1.0. This is xstart in the program (0.1).
2) Pick some k such that 1 <= k <= 4.
3) Iterate NUM_OF_IT times in a for loop. In each iteration, use the logistic difference equation to calculate the y value and store it; this y value will be used in the next iteration. For the first iteration, the y value is calculated as y = k*xstart*(1-xstart). This is stored in the dataPoints array, which has this structure: dataPoints[index of the data for some unique k][coordinate pair calculated in the ith iteration] = [k value (constant for this index), y value].
4) Render to the screen.

This will become clearer when taking a look at the following code:

// This is where the iteration takes place.
// inx is the index into the dataPoints array where the data for this unique k will be stored.
void calculatePlot(double k, unsigned int inx)
{
    // this is the base case
    // all subsequent values of y will be calculated using this
    double result = k*xstart*(1.0-xstart);
    // dataPoints[inx][i][0] will always contain k
    dataPoints[inx][0][0]=k;
    dataPoints[inx][0][1]=result;
    for ( unsigned int i = 1; i < NUM_OF_IT; i++ )
    {
        // the previous result is used to calculate the new value of y
        result = k * result*(1.0-result);
        dataPoints[inx][i][0]=k;
        // y is stored in element 1 of the 3rd dimension of the dataPoints array
        dataPoints[inx][i][1]=result;
    }
}

void initRendering()
{
    // standard OpenGL initialization
    glEnable(GL_DEPTH_TEST);
    glLineWidth(1.0);

    // Here is where the dataPoints array is actually populated with data.
    // The first k we use is 1. Then we increase k by a constant quantity,
    // so the final k is <= 4.
    double k = 1;
    double kstep = (4.0-k)/(double)NUM_OF_SETS;
    for ( unsigned int i = 0; i < NUM_OF_SETS; i++ )
    {
        // The data for this particular k is stored in the ith element of the
        // first dimension of the dataPoints array
        calculatePlot(k, i);
        // advance k by the constant quantity
        k+=kstep;
    }
}

// From what I have seen, some implementations only render the
// x-y values of the first iteration and the final iteration. This is what I have
// done here.
glBegin(GL_LINE_STRIP);
// pseudocode: "for each value of k, render the point calculated in the first iteration"
for ( unsigned int set = 0; set < NUM_OF_SETS; set++ )
{
    glVertex2f(dataPoints[set][0][0], dataPoints[set][0][1]);
}
glEnd();

// pseudocode: "for each value of k, render the point calculated in the last iteration"
glBegin(GL_LINE_STRIP);
for ( unsigned int set = 0; set < NUM_OF_SETS; set++ )
{
    glVertex2f(dataPoints[set][(int)NUM_OF_IT-1][0], dataPoints[set][(int)NUM_OF_IT-1][1]);
}
glEnd();

I hope this was enlightening. I am experimenting locally; hopefully I'll come up with a working solution soon. In the meantime, any help will be very much appreciated.
P: 68

I am the biggest idiot in the world. As opposed to drawing lines, I should have simply drawn points. The structure seemed correct because it was; there was simply extra information there. I hope I am making sense. Thanks for the help, all. Attached picture + the working source below.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <GL/glu.h>
#include <GL/glut.h>

#define NUM_OF_ITT 1000.0 // number of itterations
#define NUM_OF_SETS 1000 // number of ks that will be tested

double xstart = 0.1; // x0
double dataPoints[NUM_OF_SETS][(int)NUM_OF_ITT][2];

void calculatePlot(double k, unsigned int inx)
{
    double result = k*xstart*(1.0-xstart);
    dataPoints[inx][0][0]=k;
    dataPoints[inx][0][1]=result;
    for ( unsigned int i = 1; i < NUM_OF_ITT; i++ )
    {
        result = k * result*(1.0-result);
        dataPoints[inx][i][0]=k;
        dataPoints[inx][i][1]=result;
    }
}

void handleKeyPress(unsigned char key, int x, int y)
{
    switch (key)
    {
    case 27:
        exit(0);
    }
}

void initRendering()
{
    glEnable(GL_DEPTH_TEST);
    glLineWidth(1.0);
    glPointSize(1.0);
    double k = 1;
    double kstep = (4.0-k)/(double)NUM_OF_SETS;
    for ( unsigned int i = 0; i < NUM_OF_SETS; i++ )
    {
        calculatePlot(k, i);
        k+=kstep;
    }
}

void handleResize(int q, int z)
{
    glViewport(0, 0, q, z);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)q / (double)z, 1.0, 200.0);
}

void drawScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-3, 0, -5);
    glColor3f(0.0, 1.0, 0.0);
    for ( unsigned int set = 0; set < NUM_OF_SETS; set++ )
    {
        glBegin(GL_POINTS);
        for ( unsigned int i = 0; i < NUM_OF_ITT; i++ )
        {
            glVertex2f(dataPoints[set][i][0], dataPoints[set][i][1]);
        }
        glEnd();
    }
    glutSwapBuffers();
}

void update(int value)
{
    glutTimerFunc(10, update, 0);
}

void idlefunc()
{
    glutPostRedisplay();
}

void mouse(int button, int state, int x, int y) { }

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(400, 400);
    glutCreateWindow("Logistics");
    initRendering();
    glutDisplayFunc(drawScene);
    glutIdleFunc(idlefunc);
    glutKeyboardFunc(handleKeyPress);
    glutReshapeFunc(handleResize);
    glutMouseFunc(mouse);
    glutTimerFunc(100, update, 0);
    glutMainLoop();
    return 0;
}
Engineering Sci Advisor HW Helper Thanks P: 6,348

Well, it's not quite right yet, because you have those strange lines from the bottom left-hand corner. They are the points from when you "start up" the iteration at x = 0.1, before it settles down. If you change 0.1 to a different value (say 0.9) you will get a different pattern of those lines.

To get rid of them completely, change the line in drawScene() from

for ( unsigned int i = 0; i < NUM_OF_ITT; i++ )

to something like

for ( unsigned int i = NUM_OF_ITT/2; i < NUM_OF_ITT; i++ )

to skip over the first points you calculate.