Star maps using Blender

  • Thread starter: Janus
AI Thread Summary
Blender's recent version 4.5 introduced a .csv importer for Geometry nodes, enhancing 3D modeling capabilities by allowing users to import data for celestial objects, such as Right Ascensions and Declinations, to create detailed star maps. The VizieR database provides extensive catalogs of celestial data that can be customized and exported as .csv files for use in Blender. Users can visualize different spectral classes of stars and control their appearance based on attributes like visual magnitude. The discussion also explored the potential of using these models to simulate how constellations would appear from different stars, noting the challenges of visibility and distance. Overall, the new features in Blender open exciting possibilities for astronomical visualization and modeling.
Janus
Blender just recently dropped a new version, 4.5 (with 5.0 on the horizon), and within it was a new feature for which I immediately thought of a use. The new feature is a .csv importer for Geometry Nodes. Geometry Nodes are a method of modelling that uses a node tree to create 3D models, which offers more flexibility than straight modelling does. The .csv importer node allows you to bring in a .csv file and use its data to control aspects of your model. So, for example, if you had a list of Right Ascensions, Declinations, and distances of celestial objects, you could use it to create a 3D map of them. Each value can be pulled out as a named attribute and used to define your model.
As luck would have it, there is a site called VizieR that has a ton of catalogues of such objects, which can be searched and shown in table form. These tables can then be copied and pasted into a spreadsheet program and saved as a .csv file.
These VizieR tables can also be modified to show only the data you are interested in, and constrained by limits. For example, you could have the table list only G-class stars out to 100 ly.
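For reference, here is a minimal Python sketch of the same conversion a node tree like this performs, turning RA, Dec, and distance into Cartesian coordinates (the file name and column names are hypothetical):
```python
import csv
import math

def equatorial_to_cartesian(ra_deg, dec_deg, dist):
    """Convert Right Ascension/Declination (degrees) and distance into x, y, z."""
    ra = math.radians(ra_deg)
    dec = math.radians(dec_deg)
    x = dist * math.cos(dec) * math.cos(ra)
    y = dist * math.cos(dec) * math.sin(ra)
    z = dist * math.sin(dec)
    return x, y, z

# Hypothetical catalogue export with columns RAJ2000, DEJ2000, Dist
with open("stars.csv", newline="") as f:
    for row in csv.DictReader(f):
        x, y, z = equatorial_to_cartesian(float(row["RAJ2000"]),
                                          float(row["DEJ2000"]),
                                          float(row["Dist"]))
        print(x, y, z)
```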
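If you'd rather trim a larger export after the fact instead of constraining it on the VizieR side, a quick filter in a script works too; here is a minimal pandas sketch, assuming columns named SpType and Dist_ly:
```python
import pandas as pd

# Hypothetical catalogue export; "SpType" and "Dist_ly" are assumed column names.
stars = pd.read_csv("stars.csv")
g_stars = stars[stars["SpType"].str.startswith("G", na=False) & (stars["Dist_ly"] <= 100)]
g_stars.to_csv("g_stars_100ly.csv", index=False)
```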
Such a map for all stars out to 100 ly is shown here:
stars_100ly.webp

This is an overlay of separate models, one for each spectral class of star, with each class given its correct blackbody temperature color. This is useful if, for instance, you wanted to show only a particular class of star, which can be done by simply hiding the models for the other classes. If the file also had the visual magnitudes, you could create an attribute for this and use it to control whether a star appears in the model, or the visual appearance of the star (such as the size representing visual magnitude).
The above image, being a still, doesn't quite do the model justice, as its three-dimensionality isn't apparent. For that, an animation is more appropriate.
This YouTube video of three different models gives a better showing. The first model is of all stars out to 175 parsecs (about the distance to Polaris), the second is of DCEP Cepheid stars using the redshift z for distances, and the third is of galaxy clusters detected by X-rays.


So far, I've only scratched the surface of what can be done with this new tool, and it's going to be fun to figure out other uses for it.
 
  • Like
Likes difalcojr, collinsmark, Greg Bernhardt and 4 others
Janus said:
Blender just recently dropped a new version, 4.5 (with 5.0 on the horizon), and within it was a new feature for which I immediately thought of a use. ...

Nice. So given a 3D star map, could one find out what the constellations look like from different stars? Like, which constellation would the sun be in as seen from Alpha Centauri?
 
AlexB23 said:
Nice. So given a 3D star map, could one find out what the constellations look like from different stars? Like, which constellation would the sun be in as seen from Alpha Centauri?
Technically it could be done, though some adjustments would likely need to be made. For Alpha Centauri, you should just be able to move the camera to its position and point it at our Sun in the model. For stars significantly further away, it would take some additional work. Let's put it this way: our own Sun would only be visible to the unaided eye out to a maximum of ~50 ly, so from further than that you wouldn't even see the Sun without a telescope. Or take Eta Cassiopeiae, which from Earth is visible as a star with a magnitude of 3.48. If we were viewing our Sun from a direction that put Cassiopeia in the background, then at a certain distance this star would drop below visibility and no longer be part of the visible constellation. If you want a really accurate picture of what things would look like from a given point in space, you'd have to work out which stars you could see and which you couldn't. In addition, there are a ton of smaller red dwarf stars that we don't see but that might be visible from another star system if they lie between us and that star; in that case, viewers there would see an additional star besides our Sun in the background constellation.
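For reference, that ~50 ly figure follows from the distance modulus; taking the Sun's absolute visual magnitude as about 4.83 and a naked-eye limit of roughly magnitude 6:
$$m = M + 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)$$
Setting ##m \approx 6## and ##M \approx 4.83## gives ##d \approx 17\,\mathrm{pc} \approx 55\,\mathrm{ly}##; a slightly stricter naked-eye limit brings that down toward 50 ly.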
 
  • Like
  • Informative
Likes OmCheeto, jedishrfu and AlexB23
Janus said:
Technically it could be done, though some adjustments would likely need to be made. ...
Fascinating stuff. Once everything is set up, make some views from different stars within, say, 20 light years of the Sun.
 
Janus said:
Technically it could be done, though some adjustments would likely need to be made.
Also, our angular coordinate information is very precise, but IIRC our distance estimates have some pretty large error bars (Betelgeuse, for example, is somewhere between 408 and 548 ly away). So our estimates of the positions of stars in somebody else's night sky would quickly become unreliable as you move away from Earth.
 
  • Like
  • Agree
Likes jedishrfu and AlexB23
This is pretty cool. I was always fascinated by those galactic and supercluster maps in modern astronomy books and would look at them in wonderment.
 
A couple more examples. The first incorporates proper motion data to show how Ursa Major would change over time. I included other visible stars and their proper motions as well. The second touches on the change of view due to position and shows how the constellation would change as the viewing angle changes.
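Linearly propagating catalogue proper motions is the simplest way to do that sort of time evolution; here is a rough Python sketch (the example numbers are illustrative only, and it assumes the catalogue's pmRA is already multiplied by cos δ, as Gaia and Hipparcos report it):
```python
import math

MAS_PER_DEG = 3.6e6  # milliarcseconds per degree

def propagate(ra_deg, dec_deg, pmra_masyr, pmdec_masyr, years):
    """Linearly shift RA/Dec by proper motion over `years`.
    Assumes pmra is mu_alpha* (already includes the cos(dec) factor)."""
    dec = dec_deg + (pmdec_masyr * years) / MAS_PER_DEG
    ra = ra_deg + (pmra_masyr * years) / (MAS_PER_DEG * math.cos(math.radians(dec_deg)))
    return ra, dec

# Illustrative Dubhe-like values, propagated 100,000 years
print(propagate(165.93, 61.75, -134.0, -35.0, 100_000))
```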
 
  • Like
Likes collinsmark
Janus said:
A couple more examples. The first incorporates proper motion data to show how Ursa Major would change over time. I included other visible stars and their proper motions as well. The second touches on the change of view due to position and shows how the constellation would change as the viewing angle changes.

This looks nice. Maybe labeling the brightest stars and putting a timeline with a year-display HUD in the corner could give people a sense of time passing.
 
  • Like
Likes collinsmark
Janus said:
Blender just recently dropped a new version, 4.5 (with 5.0 on the horizon), and within it was a new feature for which I immediately thought of a use. ...
Very interesting. It reminded me of what my company did in the '80s. We needed to take an optical image of a fingerprint, compress it, encrypt it, send it to AFIS (the Automated Fingerprint Identification System) at the FBI in Washington, DC, and get a response in real time. The fastest chips commercially available were the 8888 Intel chips. Not fast enough (understatement). So we encrypted the analog print as a fractal before transmission. Fractals encode the same "information" regardless of 2D size. It showed prints in 3D also: "expand" the fractal, and you see intricate nuances. Compressed, the print had a smaller signal volume until the "package" was opened at AFIS. Same info, though. It was also compartmented in a sea of other communications traffic, a bit like steganography, where the "dominant image" was the "coat" it was packaged in. You seem to use geometry, like we used fractals: to see more with less.

Trivia note: when you use a credit card and the bank gets your info and sends it back to the merchant securely in seconds, my company helped make that possible.
 
  • Like
Likes neilparker62
  • #10
Janus said:
...

So far, I've only scratched the surface of what can be done with this new tool, and it's going to be fun to figure out other uses for it.
Any idea how difficult it would be to trace the path of 3I/ATLAS back in time?
 
  • #11
OmCheeto said:
Any idea how difficult it would be to trace the path of 3I/ATLAS back in time?
That would be a bit beyond this particular application's abilities. First off, you'd need accurate values for all the orbital elements of its trajectory through the Solar System, and the accuracy of the resulting trajectory would depend on how accurate our values for those elements are. Tracking an object along that trajectory over time is not easy either: there is no direct analytic solution, so you need to do a series of iterations (taking the output of an equation and feeding it back into the equation over and over), and the accuracy depends on how many iterations you perform.
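As an illustration of the kind of iteration involved (a sketch of one standard approach, not necessarily what an ephemeris service does), finding where an object is on a hyperbolic trajectory means solving the hyperbolic Kepler equation M = e sinh H − H, which has no closed-form solution and is usually handled with Newton's method:
```python
import math

def solve_hyperbolic_kepler(mean_anomaly, eccentricity, tol=1e-12, max_iter=50):
    """Solve M = e*sinh(H) - H for the hyperbolic anomaly H by Newton iteration."""
    H = math.asinh(mean_anomaly / eccentricity)  # rough starting guess
    for _ in range(max_iter):
        f = eccentricity * math.sinh(H) - H - mean_anomaly
        f_prime = eccentricity * math.cosh(H) - 1.0
        step = f / f_prime
        H -= step
        if abs(step) < tol:
            break
    return H

# Example with an arbitrary hyperbolic eccentricity and mean anomaly
print(solve_hyperbolic_kepler(mean_anomaly=2.0, eccentricity=3.5))
```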
 
  • Like
  • Sad
Likes AlexB23, OmCheeto and collinsmark
  • #12
OmCheeto said:
Any idea how difficult it would be to trace the path of 3I/ATLAS back in time?

That might be very difficult, as it appeared to make dramatic course corrections over time, as if someone were controlling it or as if it had a peculiar internal architecture, like an embedded strong magnet.
 
  • #13
AlexB23 said:
Fascinating stuff. Once everything is set up, make some views from different stars within, say, 20 light years of the Sun.
Since moving just a few light years would not make much of a difference, I decided to make this animation. It starts from 5 ly out and then recedes. I chose Ursa Major as the background constellation again, as it is one of the most recognizable. Since the original data didn't include the Sun, I added it as the small orange dot in the center of the image. It disappears when the animation reaches a distance beyond which it would no longer be visible to the unaided eye. It took a bit of fiddling to get things to look somewhat reasonable while still having stars vanish when they should as they dropped below visibility. Since the data I was working from didn't include the stars' absolute magnitudes, I had to work them out from their apparent magnitudes as seen from Earth, and then, from that, work out the apparent magnitudes as seen from the receding camera.
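A minimal sketch of that magnitude bookkeeping (distances in parsecs; the function and variable names are mine):
```python
import math

def absolute_from_apparent(m_earth, dist_from_earth_pc):
    """Absolute magnitude from the apparent magnitude seen from Earth."""
    return m_earth - 5.0 * math.log10(dist_from_earth_pc / 10.0)

def apparent_from_camera(M, dist_from_camera_pc):
    """Apparent magnitude as seen from the (receding) camera position."""
    return M + 5.0 * math.log10(dist_from_camera_pc / 10.0)

# Example: a star of apparent magnitude 2.0 at 25 pc, viewed from 80 pc away
M = absolute_from_apparent(2.0, 25.0)
m_cam = apparent_from_camera(M, 80.0)
print(M, m_cam, m_cam <= 6.0)  # last value: rough naked-eye visibility test
```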

 
  • Like
Likes AlexB23 and OmCheeto
  • #14
Janus said:
Since moving just a few light years would not make much of a difference, I decided to make this animation. ...


Good job, man. You are getting places with your star map. The next step would be to color the stars according to their spectral class.
 
  • #15
AlexB23 said:
This looks nice. Maybe labeling the brightest stars and putting a timeline with a year-display HUD in the corner could give people a sense of time passing.
As per your request:


Getting the text to look proper for the second part took some finagling. The first part was pretty straightforward: basically it meant making an individual CSV file with only a single row of information, relating to the star I was labeling, then using that file in the node setup and replacing the "star" with an instance of the text object for the label. Each label required its own geo-node setup (praise be copy and paste!). The second part was a bit different, as I had to make sure that the text instances rotated to keep facing the camera and, since they would also be changing their distance from the camera, scaled properly so their apparent size wouldn't change in the view. Here is the overall geo-node setup for a text object.
textgeo.webp

The red box at the top is the import of the CSV file. The red boxes on the left reference the right ascension, declination, and parallax information. The blue boxes to the right of them are the math nodes that convert these into separate x, y, and z coordinates.
Here's a blowup of the black box:
text_box.webp

The separate x, y, z coordinates are combined into a single vector output. There are two Object Info nodes, one for the camera and one for the text object. The location of the camera, along with the vector output we got earlier, is fed into a vector math node, which outputs the distance between the text object instance and the camera. This, in turn, is used to control the scale of the label we see, increasing or decreasing it as the distance increases or decreases, keeping the visual size constant in the camera view. The Object Info node for the text object uses two outputs: the Geometry output, fed to the Instance input, tells the setup that this is the reference object to be used to make the "label" instance, and the Rotation output tells the instance to keep the same rotation as the text object. If I now go outside of the geo-node setup to the objects themselves (camera and text object), I can parent the text object to the camera. This way, the text object "follows" the camera, mimicking its motion and rotation, which, in turn, ensures that the label instance always keeps the correct orientation to the camera. If you're wondering why I didn't just feed the rotation info of the camera into the rotation of the Instance on Points node, that would have put the axes of both pointing in the same direction, meaning the camera would be looking at the "backside" of the label. The z axis of the text needed to point at the camera for the text to look correct.
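For anyone who prefers scripting to nodes, the same two tricks (scale with distance, face the camera) can be sketched in Blender's Python API; the object names here are assumptions, not what I used in the node setup:
```python
import bpy

camera = bpy.data.objects["Camera"]     # hypothetical object name
label = bpy.data.objects["StarLabel"]   # hypothetical object name

# Scale the label in proportion to its distance from the camera so its
# apparent size in the camera view stays constant.
reference_distance = 10.0  # distance at which the label has scale 1.0
to_camera = camera.matrix_world.translation - label.matrix_world.translation
s = to_camera.length / reference_distance
label.scale = (s, s, s)

# Rotate the label so its +Z axis points at the camera, keeping the text
# readable from the camera's side (same idea as the parenting trick above).
label.rotation_euler = to_camera.to_track_quat('Z', 'Y').to_euler()
```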
 
  • #16
Janus said:
As per your request: ...

You are speaking Greek to me, but I understand that this looks good. Dubhe is moving quickly compared to the other stars.
 
