The difference a new computer makes for 3D rendering

  • #1
Janus
Just recently, I finally upgraded my 7+ year old computer. I was able to find a good deal on an HP Omen with a GeForce RTX 2060 graphics card.
So, not only was I able to install the latest build of Blender, but I was able to go back and tweak some renders I had done with the old computer.
For example, I had done a render of the robot from Lost in Space a while back. And while it wasn't necessarily a bad render, it could have used some extra work. The problem was that, with my old computer, the kind of material adjustments that would give the render that little something more required doing test renders to see the end result, and those could take a long time on the old system. That made me less than enthusiastic about making those small adjustments.
Below is the newer version.
B9_2.png


The robot model itself is unchanged, but the textures have been tweaked. The scene has also been filled out a bit: the original just had the crags on the horizon, while the tree/shrub, foreground rocks, and ground cover are new. I also added a bit of focal blur, and a "haze" to give the scene a bit more depth.
 
  • #2
Here's a bit of a more dramatic example.
Here is a render of the LEM done on my old computer using Blender 2.78, with the default Blender render engine.
lem_old.png


And here it is rendered with the Cycles render engine on the same machine.
LEM_CYCLES.png

Better, but the rendering time went way up, so while this could still use improvement, it wasn't something I was too enthusiastic to go back and adjust.
LEM_3.png

But with the new computer, I was able to rework the entire scene into this, with a full render time of only 10 minutes.
One note is that I changed the Moon's "landscape". With the old one, there were some subdivision issues that caused noticeable flaws in the new texturing/displacement I wanted to apply to it.
 
  • #3
Very nice images!
 
  • Like
Likes sysprog
  • #4
Fascinating change of shadow definitions in the three renditions.

The first image shows sharp delineations of the LEM's shadow on the lunar surface, reminiscent of actual photographs. In the second image the shadows appear fuzzy and the lunar surface more 'snow'-like, as if a brief dust cloud were obscuring sunlight. The final rendition combines these effects with more perceived depth and a quite realistic surface beneath the LEM.

The boulders add historical as well as artistic interest. I seem to remember Neil Armstrong manually redirecting the LEM to avoid boulders at the preferred landing site.
 
  • #5
Klystron said:
Fascinating change of shadow definitions in the three renditions.

The first image shows sharp delineations of the LEM's shadow on the lunar surface, reminiscent of actual photographs. In the second image the shadows appear fuzzy and the lunar surface more 'snow'-like, as if a brief dust cloud were obscuring sunlight. The final rendition combines these effects with more perceived depth and a quite realistic surface beneath the LEM.

The boulders add historical as well as artistic interest. I seem to remember Neil Armstrong manually redirecting the LEM to avoid boulders at the preferred landing site.
With the first render, the render engine did not take the "size" of the light source into consideration.
With the second one, it did, leading to "softer" shadowing, but I used the default size for the "Sun", which led to too much softening.
I tried to strike a balance in the third by reducing the angular size of the Sun. Lighting is always a huge factor in renders, and sometimes it comes down to weighing realism against art.
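For anyone who'd rather set this by script, here's a minimal bpy sketch (Blender 2.8+ Python API; the energy value and the roughly-real-Sun angle are just starting points, not the exact settings I used):

```python
import bpy
import math

# A Sun lamp's "angle" property is its angular diameter in radians.
# Smaller angle -> sharper shadows; larger angle -> softer penumbras.
sun_data = bpy.data.lights.new(name="Sun", type='SUN')
sun_data.energy = 3.0                   # lamp strength
sun_data.angle = math.radians(0.53)     # roughly the real Sun's angular size
sun_obj = bpy.data.objects.new("Sun", sun_data)
bpy.context.collection.objects.link(sun_obj)
```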
 
  • #6
While exploring some of the new features of the most recent version of Blender, I came across a cool additional option for one of the physics modifiers.
The modifier is the "cloth" modifier, which lets you simulate the behavior of cloth. It can be used to create a waving flag, a tablecloth draped over a table, etc.
The new option is "pressure".
To explain what this does, we'll start with a simple cube:
cube.png

Though you can't see it here, this one has its sides divided up into 20x20 arrays of "faces". You need to subdivide the object like this in order to give the cloth modifier something to work with when deforming the cube. I have also already applied the cloth modifier, but again, you can't see that, as the cube has nothing to interact with yet.
If I were to add a collision plane below the cube and then let the animation run a bit, the cube would collapse onto the plane as if it were an empty cube-shaped bag made of cloth.
But the new option allows you to give the "bag" an internal "pressure", as if it were pumped full of air, and you can vary the value of this pressure.
So, if I take our cloth cube, and raise the pressure value up, you can get this:

cube_inflate.png

The cube bulges against the internal pressure. (There are ways to make this look better, such as starting with a cube that has rounded edges and corners, even if only slightly, but this is just a simple example.)
If I were to allow this object to fall onto the plane, it would give a bit, and maybe partially collapse (depending on the pressure value).
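If you'd like to try this yourself, here's a rough bpy sketch of the whole setup (Blender 2.82+, where cloth pressure was added; the subdivision count matches the cube above, but the pressure value is just a guess to experiment with):

```python
import bpy

# Subdivided cube: the cloth solver needs faces to deform.
bpy.ops.mesh.primitive_cube_add(location=(0, 0, 3))
cube = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.subdivide(number_cuts=19)   # gives 20x20 faces per side
bpy.ops.object.mode_set(mode='OBJECT')

# Cloth modifier with the new "pressure" option enabled.
cloth = cube.modifiers.new(name="Cloth", type='CLOTH')
cloth.settings.use_pressure = True
cloth.settings.uniform_pressure_force = 5.0   # raise this for a tighter "balloon"

# A plane below the cube, set as a collision object for the cloth to land on.
bpy.ops.mesh.primitive_plane_add(size=20, location=(0, 0, 0))
plane = bpy.context.active_object
plane.modifiers.new(name="Collision", type='COLLISION')
```

Run the animation after this and the cube falls, bulges, and settles against the plane.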
This opens up a slew of possible things to play around with.
The first thing I tried was suggested by the fact that the pressure setting can be animated, which inspired me to attempt this:
balloon2.gif

Surprisingly, it didn't take a lot of time to get something that looked halfway decent.
The "balloon letters" were made by starting with a text object. Extruding it to make it 3 dimensional, then adding a rounded bevel to remove straight edges and corners.
Convert the text object into a mesh object (the cube we used earlier was also a mesh object). Apply a re-mesh modifier ( This more or less rearranges how the object's faces are arranged. This is important, because the assigned faces created in the text to mesh conversion would give really odd creases and folds in our "cloth". )
Create some "pin points". These are parts of the mesh that will remain fixed, and non-moving in the animation and act as "anchors" for our letters.
Assign the cloth modifier, with "pinning" enabled and your selected points assigned as pins.
Put a collision plane under the mesh.

Now, on to animating:
Make sure pressure is set to 0 in the cloth settings.
Let the animation run for a number of frames, to allow the cloth letters to settle down onto the plane.
Stop the animation. With pressure still at 0, assign a "key frame" to the pressure value. (This ensures that the pressure value remains 0 up until this point.) Move ahead in the animation by the amount of time over which you want the letters to inflate, turn up the pressure value (I used 180), and set a new key frame. (Between the last key frame and this one, the pressure value will ramp up from 0 to 180, 'inflating' the letters.)

Now we need one more thing. Left like this, the letters will inflate, but they won't fully stand up. So I added a "wind" force field. This, as the name suggests, simulates a blowing wind; in this case, I orient it so that the wind blows upward. You set up key frames for it so that there is zero wind while the letters settle onto the plane, and the wind picks up when the letters inflate. The upward-acting "wind" makes the letters stand upright.

Choosing the right pressure and wind values is a matter of trial and error: pick a setting, see how the animation behaves, adjust the value, see how it changes, and so on. A rough scripted version of the keyframing is sketched below.
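(The frame numbers, the wind strength, and the object/modifier names "Letters", "Cloth", and "Wind" in this bpy sketch are all assumptions for illustration; 180 is the pressure value mentioned above.)

```python
import bpy

# Ramp the cloth pressure from 0 to 180 between two keyframes.
letters = bpy.data.objects["Letters"]            # hypothetical mesh with a Cloth modifier
settings = letters.modifiers["Cloth"].settings

settings.uniform_pressure_force = 0.0
settings.keyframe_insert(data_path="uniform_pressure_force", frame=50)   # still deflated

settings.uniform_pressure_force = 180.0
settings.keyframe_insert(data_path="uniform_pressure_force", frame=100)  # fully inflated

# Keyframe the upward wind the same way, so it picks up as the letters inflate.
wind = bpy.data.objects["Wind"]                  # hypothetical Wind force field object
wind.field.strength = 0.0
wind.field.keyframe_insert(data_path="strength", frame=50)
wind.field.strength = 2000.0                     # tune by trial and error, as noted above
wind.field.keyframe_insert(data_path="strength", frame=100)
```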
Now we just render the animation, choosing to render only the frames from just before inflation starts to somewhat after full inflation.
I gave the letters a metallic, Mylar-like material, so I used an HDRI as the background to give them something varied to reflect.
And that about covers it. There are some other details I didn't include, but I didn't want to put everyone to sleep.
 
  • #7
Umm... Instead of collapsing, shouldn't that cube approximate a sphere as the internal pressure increases? Or was it external pressure that increased?

OK, with that nit picked, NEATO!
 
  • #8
Tom.G said:
Umm... Instead of collapsing, shouldn't that cube approximate a sphere as the internal pressure increases? Or was it external pressure that increased?

OK, with that nit picked, NEATO!
It is bulging at the sides. The pull-in effect is due to the model trying to hold a constant surface area. Imagine a line going all the way around the cube at its midpoint: if you try to reshape it as a circle without changing its perimeter, the corners will be drawn in toward the center.
 
  • #9
Those are nice models. The update looks particularly good.
Now that you have the nice new machine with the RTX card, you could set Cycles to render with the GPU and save a lot of time.

Also, a good setting for space is to set your world colour to black, use a sun light as your only light, and set the strength to 3 or so.
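In case it helps anyone, the script equivalent of those suggestions is roughly this (a bpy sketch; 'OPTIX' vs. 'CUDA' depends on your card and Blender version):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'
# You may also need to pick the backend once, as under Preferences > System:
bpy.context.preferences.addons["cycles"].preferences.compute_device_type = 'OPTIX'

# Black world colour for space scenes.
world = scene.world
world.use_nodes = True
world.node_tree.nodes["Background"].inputs["Color"].default_value = (0, 0, 0, 1)

# Single sun light at strength ~3 (assumes a lamp named "Sun" already exists).
bpy.data.lights["Sun"].energy = 3.0
```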
 
  • #10
Tea Monster said:
Those are nice models. The update looks particularly good.
Thanks.
Now that you have the nice new machine with the RTX card, you could set Cycles to render with the GPU and save a lot of time.
Already done. It has really been a great help.
Also, a good setting for space is to set your world colour to black, use a sun light as your only light, and set the strength to 3 or so.
That's one of the areas where the faster render times are a boon. I'm much more inclined to fiddle with the lighting when I don't have to wait so long for the renders.
 
  • #11
A short video of a recreated shot from the first Star Wars movie. I just finished the Vader TIE fighter model, and the other TIE fighters are slightly modified models I had done a while back.
It makes use of motion blur with the camera so you get that smearing effect for the trench. I also added a bit of noise to the camera position and tilt to simulate "camera shake".
tie3.gif


It's about twenty frames long, and it only took several minutes to render all the frames.
 
  • #12
Today I would like to know if it's possible to transfer saved and edited projects from one computer to another, and at what graphics quality GTA will render the exported footage.

To be more precise, the main idea I had in mind was to record a bunch of scenes, edit them on my crappy computer, and then export them and render them with better graphics on a friend's more powerful computer.

So the main idea would be to use the more powerful computer for rendering, due to its superior graphics card (a 3070).

Is that possible?

Or will it be rendered with the graphics quality it was originally shot at (on the crappy computer)?

(P.S. No, I can't work on the powerful computer from the get-go, for a few reasons. The first is that I don't own that computer. Second (which is linked to the first), I can't use it that much because my friend is busy working on other things. That's why I'm recording and editing on my end, so that he can work on something else.)
 
  • #13
Your friend's computer would have to have the same 3D software as you have (for instance, Blender).
Then, once you are done, you can just copy the Blender file for your scene (and any other files: images, etc.) to his computer to do the render.
You will have to make sure the render settings are changed to get a better render. For instance, you might use a low sample rate on your system in order to do quick low-res renders, and you would need to raise that to get a better render on his computer.
One disadvantage I can see with doing things this way is that the appearance of materials can change when you change the quality of the render. I've had instances where something looked pleasing at a lower resolution, but when I went higher, it looked awful. If you are doing the final render on a different computer than the one you are editing on, this could be a headache.
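As a rough illustration, these are the sorts of settings you'd bump back up on the faster machine before the final render (a bpy sketch; the specific numbers are just examples, not recommendations):

```python
import bpy

scene = bpy.context.scene
scene.cycles.samples = 1024                # e.g. 64 for quick previews on the slow machine
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 100   # e.g. 50% for low-res local tests
```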
 
  • #14
Janus said:
Your friend's computer would have to have the same 3D software as you have (for instance, Blender).
Then, once you are done, you can just copy the Blender file for your scene (and any other files: images, etc.) to his computer to do the render.
You will have to make sure the render settings are changed to get a better render. For instance, you might use a low sample rate on your system in order to do quick low-res renders, and you would need to raise that to get a better render on his computer.
One disadvantage I can see with doing things this way is that the appearance of materials can change when you change the quality of the render. I've had instances where something looked pleasing at a lower resolution, but when I went higher, it looked awful. If you are doing the final render on a different computer than the one you are editing on, this could be a headache.
Thanks for your response.
 
  • #15
Just an example of what you can do with realtime rendering nowadays. With physically based materials and realtime raytracing, you can achieve some spectacular results.
 

Attachments

  • tbrender_Viewport_017.png
  • tbrender_Viewport_016.png
  • tbrender_Viewport_010.png
  • tbrender_Viewport_007.png
  • #16
So, just a bit back, Blender released a new version. Along with some performance improvements came some helpful new features, one of which is the "shadow catcher pass".
It has a number of uses, but it is especially helpful in one situation:
when you are using an HDRI in your render. At the most basic level, an HDRI gives you a background on which you can place your modeled objects. But it goes beyond that.
1. It is panoramic. If you place a camera in your scene and then tilt or pan it, the background rendered by the camera changes appropriately. If you place a reflective object in the scene, that object will accurately reflect everything as if it were actually in that environment.
2. It contains lighting information. If you place a modeled object into a scene with an HDRI, it will be lit in a way that matches how the HDRI scene itself is lit.

This is really nice if you want to put your model into a life-like scene.

However, there is one drawback. The HDRI is still just a background. You can place a model so that it looks like it is sitting on the ground, but while it will be lit and shaded properly, it won't cast a shadow on that "ground", and the result looks like a bad Photoshop job.

I played around with ideas for how to get around this limitation.

My first attempt was to set up the scene with a white plane where the ground was, have the model cast a shadow onto it, take the rendered image into a paint program, and carefully erase everything in the scene but the shadow. Then I would re-render the scene without the plane and save that image, and finally layer the two images together, with the "shadow" image on top and its transparency adjusted so that it darkened the image just the right amount where the shadow should be cast.

It worked, and gave a result that passed muster (if you didn't look too closely).

Then I started to learn how to use the compositor in Blender. This is a "node" tree that lets you add various post-render effects to a scene, including mixing together various inputs, such as images.
This made the process a bit easier. It still involved rendering a shadow onto a white plane, but it didn't require me to carefully trace out the shadow like before, and it also let me layer the shadow effect directly in Blender without having to use separate software.

However, this method had its drawbacks too. Everything in the "shadow alone" image that wasn't shadow needed to be pure white, and if the lighting in the HDRI wasn't white light, it would color the plane. Sometimes the lighting would also be noisy. In addition, the white plane couldn't always hide all of the HDRI background, so you still had to use a paint program to paint those parts of the image white.

All in all, while still better than the previous method, it was tedious in some respects. And if you decided afterwards that you weren't happy with the camera angle, etc., you'd have to do a good deal of the process over again.

But now, due to the shadow catcher pass, things have gotten much better.
You still use a plane for the model to cast the shadow on, but now you click an option to make it a shadow catcher mask. You also go into the render settings and add a shadow catcher pass. What this does is cause Blender to do a second rendering pass, one that captures only the shadow cast onto the plane. (If you were to render an image from this pass, it would be white everywhere except where the shadow is cast: exactly what we were trying to do before, but accomplished by simply ticking a box.)

When you render the scene now, the model renders, but everywhere the shadow-catching plane is, you get transparency. You'll still see any of the HDRI background that isn't hidden by the plane, but this is easily remedied by making the plane wide and adding a "back wall" to it to make sure all the background is hidden.

Now we add a second scene. (There is a place to do this, and it also lets you toggle back and forth between scenes.) The second scene contains only a camera and the HDRI background (you can just copy and paste these from the first scene).

Now we open the compositor. It should already have a render input node assigned to the first scene. Add a second render input, and assign it to the second scene you created.

Each of these nodes has outputs. The second scene's node will have an image output and a couple of others, which we don't need. The first scene's node has an additional output labeled "shadow catcher", and this is how you access the shadow catcher pass.

Create a "mix" node, and set the type to multiply. Take the image output of the 2nd scene and the shadow catcher output of the first scene and feed them into the two image inputs of the mix node.
This takes the shadow pass image and uses it to control the darkness of the HDRI image. Everywhere the shadow pass is white, you just see the background unaltered. Everywhere is is not white darkens the background by an amount determined by how far from white it is.
Create a new mix node. by default it is set to mix, and that's what we want. Take the image output and feed it to one of the image inputs of the mix node. Take the image output of the first mix node and feed it into the other image input.
Take the output from the first scene labeled "alpha" and feed it to the factor input of the mix node. Alpha is a measure of transparency, so doing this tells the combined image, "Everywhere there is transparency in the first scene's image, put the background image, and everywhere it isn't, put the first scene's image.
Take the image output from this mix node and feed it to the output node.

Now you just render the scene.
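For reference, the same node setup can be built in script form. This is a minimal sketch against the Blender 3.0+ Python API; the view layer name "ViewLayer", the scene name "Scene_HDRI", and the plane name "GroundPlane" are assumptions you'd replace with your own:

```python
import bpy

scene = bpy.context.scene
# Enable the shadow catcher pass and mark the ground plane as the catcher.
scene.view_layers["ViewLayer"].cycles.use_pass_shadow_catcher = True
bpy.data.objects["GroundPlane"].is_shadow_catcher = True

scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl_main = tree.nodes.new('CompositorNodeRLayers')      # first scene (model + catcher)
rl_bg = tree.nodes.new('CompositorNodeRLayers')        # HDRI-only scene
rl_bg.scene = bpy.data.scenes["Scene_HDRI"]

# Multiply: darken the HDRI wherever the shadow pass is less than white.
mult = tree.nodes.new('CompositorNodeMixRGB')
mult.blend_type = 'MULTIPLY'
tree.links.new(rl_bg.outputs['Image'], mult.inputs[1])
tree.links.new(rl_main.outputs['Shadow Catcher'], mult.inputs[2])

# Mix: use the first scene's alpha to choose model vs. shadowed background.
mix = tree.nodes.new('CompositorNodeMixRGB')           # blend_type is 'MIX' by default
tree.links.new(rl_main.outputs['Alpha'], mix.inputs[0])    # factor
tree.links.new(mult.outputs['Image'], mix.inputs[1])       # transparent -> background
tree.links.new(rl_main.outputs['Image'], mix.inputs[2])    # opaque -> model

out = tree.nodes.new('CompositorNodeComposite')
tree.links.new(mix.outputs['Image'], out.inputs['Image'])
```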
Here's an example, using models of Gumby and Pokey that I had previously made, and an HDRI of the Zhengyang Gate (I used it because the lighting gives a good shadow, and the ground is nice and flat).

GUMBY_CHINA.png


Not only does this make for a faster and easier way of creating this effect, it is also much easier to make changes. Before, if I wanted to move the models in the scene, change the camera angle, change the orientation of the HDRI, or change the HDRI entirely, I would have had to make the changes, render new shadow mask images, make the necessary alterations to them, put them into the compositor setup, and re-render the image. Now I can just make the changes and directly render the new result.
 
  • #17
I use Poser, which has a free option for a remote-render queue. Given the bit-miner / gamer driven crazy / stupid prices of high-end graphics cards, I realized it was cheaper and much more future-resistant to build an entire remote-render, CPU-only 'Box' than to significantly upgrade either of my CAD-Tower's now-ageing twin GPU cards...

So 'Box' has a Ryzen 7, 32 GB RAM, an old 'office' GPU card left over from testing the CAD-Tower, and a 14" VGA display hung off the end of the desk, beneath the left-most of the CAD-Tower's four displays. Sent render jobs, 'Box' deploys 15¾ of its 16 threads at 95~98% CPU, and runs like the wind...
 
  • #18
This is a really neat trick I just learned, and it's really not that labor intensive.

You start with a bezier curve object. You then edit it with the "draw" tool, by using the mouse to draw on the screen the path you want your curve to follow.
You don't have to be perfect, as you can then use control points of the curve to make adjustments.
Once you get it looking the way you want, convert it to a mesh. (This has to be done in order to apply the following modifiers.)
The first modifier is the "cloth" modifier. It basically makes objects behave as if they are made of cloth when animated. In this case, our curve will act like a string.
Now we attach a "hook" to one end of the string. This is basically an object that this end of the string remains attached to and follows, pulling the rest of the string with it. Since we don't need (or want) this object to be visible, we use an "empty" object.
Speaking of being visible, if we tried to do a render at this point, we wouldn't see anything, because our "string" is one dimensional. To fix this, we add a "skin" modifier, which gives it some thickness that we can adjust to our needs. We can also now assign a material to it.
If we render now, we see our string following the curve we traced out earlier. If we run the animation, the string will droop, hanging and swinging from the stationary hook.
For this particular effect we don't want this; instead, we want it to hold its shape until the hook moves. We do this by turning gravity off in the cloth settings.
So for example, assume your string traces out a word in cursive. It will hold this shape until the Hook starts to move, then the word will "unravel" starting at one end, working its way down the word until the whole string is moving.
This, in and of itself, is a pretty neat effect, but we can improve on it even more by applying the "build" modifier. By setting a start frame and the number of frames over which to apply the effect, the animation will build the string, segment by segment, over that span, and it does so in the same order in which you originally traced the curve with your mouse. So you can animate the word being written out before it gets pulled off screen.
Okay, so that's the basic effect (and while what I wrote above may seem like a lot, it's easier to do than to explain; the whole modifier stack is sketched below).
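For the curious, here's what that modifier stack looks like as a bpy sketch (the object names "StringMesh" and "Empty", the vertex group "HookEnd", and the frame counts are all assumptions for illustration):

```python
import bpy

string = bpy.data.objects["StringMesh"]      # the drawn curve after Convert > Mesh

# Cloth: make the mesh behave like string. Zero the gravity field weight so it
# holds its drawn shape until the hook moves.
cloth = string.modifiers.new(name="Cloth", type='CLOTH')
cloth.settings.effector_weights.gravity = 0.0

# Hook: pin one end to an empty that drags the string around.
hook = string.modifiers.new(name="Hook", type='HOOK')
hook.object = bpy.data.objects["Empty"]      # the invisible "handle"
hook.vertex_group = "HookEnd"                # vertex group holding the end vertex

# Skin: give the one-dimensional string renderable thickness.
string.modifiers.new(name="Skin", type='SKIN')

# Build: reveal the mesh segment by segment, in the order it was drawn.
build = string.modifiers.new(name="Build", type='BUILD')
build.frame_start = 1
build.frame_duration = 80
```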
And, by adding a few more elements ( like a "writing surface" and a pencil to "write" the words, you can do something like this:

 
  • Wow
Likes Algr
  • #19
That's neat! I actually downloaded Blender, but haven't done much with it yet. (Tied up with other jobs.)
 
  • #20
And now, for my next trick:


I ran across a tutorial on YouTube that demonstrated how to make the growing, spark-emitting circle effect used in Dr. Strange for opening a portal.
So naturally, I wanted to see if I could use it to make a "fully functioning" portal animation.

Basically, what I wanted was for the portal to open up and show another location through the opening. Something comes through the portal, and then the portal closes behind it.
I tried a couple of different methods (some more successful than others), and this is what I finally came up with.


A link to the tutorial is included in the description of the video.
(I did modify it a bit from the tutorial version by having the "sparks" change color as they age, becoming more orange.)

I also came up with a way to have the portal "cast light" onto the scene, which enhances the overall effect. This technique should come in handy with other projects.
 
  • #21
After much procrastination, I've finally decided to dip my toes into VFX (visual effects) using Blender.
Up to now, I've settled for doing animations with static backgrounds for the scenes. And even though HDRIs allow for panning and tilting of the camera, there are limitations: the background scene itself is a still, and you can't move the camera from its position.
Putting a 3-D model into a video, particularly one with a moving camera, is a challenge.
Luckily, Blender has a method of making a lot of the hard work less daunting.
It uses motion tracking, and here's the gist:
You start with the video you want to use. One nice thing is that it doesn't have to be taken with special equipment; it can just be one you've shot with your cell phone. You load the video into Blender, go to frame 1, and place markers at different points of the image (for this to work best, these markers should be areas of high contrast). You need at least 8. Then you run the tracking tool: the video plays, and the computer tries to place markers on each frame, tracking the points you chose. (This doesn't always go smoothly the first time around, but there are ways to fix things as you go along.)
Once the tracking is done, you do a camera solve. The program will create a basic Blender scene with a camera, one that moves in the scene just like the camera you took the video with did.
There are some adjustments that allow you to align this scene with the video, and once that's done, you can put any 3-D model you want into the scene, effectively placing it into the video.
After a couple of false starts, I got what I think is not too bad a result. Since this was just a test, I decided to just put some black marks on a piece of paper to make tracking easier. The ship is a CGI 3-D model I had made previously.



I still have some work to do, as the motion tracking between model and video is not perfect (the fact that the ship is moving in the video helps hide this), but just this little success has given me some ideas for future projects.
 
  • #22
I came across another tutorial for an effect in Blender. It is quite neat and involves "geometry nodes", a feature added to Blender just a few versions ago that has been growing by leaps and bounds since.

It differs from the normal means of modeling in that it uses a "node tree". This has a number of useful aspects. One is that it allows you to do some things that would be extremely difficult or impossible to do otherwise.
Another is that it is "non-destructive", which means you can undo or change something at any point of the modeling process without having to undo everything you've done to the model since.

It's as if you were building a house and, after you start framing it, you realize the foundation is laid wrong. Normally you'd have to tear everything down, fix the foundation, and start framing again. With geometry nodes, you can go to the node that controls the parameters of the "foundation", change them, and the framing you've already built will adjust to the new foundation.

The effect in question creates an "electric arc" that jumps from one object to another. You can set the maximum distance over which the arc will jump, how many arcs are produced at a time, etc.
Here's the video of what I did with it as a test. A link to the tutorial is included in the description for anyone interested. I'll also add an image of the entire node tree, but because of its size and complexity, the names of the nodes and their settings aren't readable.
The node tree for the effect:
lightning_nodes.png
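For anyone wanting to poke at geometry nodes from Python, here's a minimal scaffold (Blender 3.x API). It only attaches an empty, pass-through node group to an object; the actual arc effect is the large tree shown above, built in the editor:

```python
import bpy

obj = bpy.context.active_object
mod = obj.modifiers.new(name="GeometryNodes", type='NODES')

# New geometry node group with geometry in/out, wired straight through.
group = bpy.data.node_groups.new("ArcEffect", 'GeometryNodeTree')
group.inputs.new('NodeSocketGeometry', "Geometry")
group.outputs.new('NodeSocketGeometry', "Geometry")
n_in = group.nodes.new('NodeGroupInput')
n_out = group.nodes.new('NodeGroupOutput')
group.links.new(n_in.outputs["Geometry"], n_out.inputs["Geometry"])

mod.node_group = group   # non-destructive: edit the tree, the mesh follows
```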
 
  • #23
Sadly, Blender's quirky UI, like Daz Studio's, gives me a prompt migraine...
 
  • #24
Nik_2213 said:
Sadly, Blender's quirky UI, like Daz Studio's, gives me a prompt migraine...
Granted, it is a steep learning curve, and I do find myself having to refresh my memory on how to do a particular thing if I haven't used that feature in a while. (And then there's the fact that every few months a new version comes out, adding even more features.)
And I do understand what you mean about the UI; I had a devil of a time adjusting to it after having used other modeling programs.
 
  • #25
I found a new toy that's going to be interesting to play with.
AI and cloud computing have come to Blender.
You can now download an app for your phone, use it to snap a number of pictures of an object from different angles, and upload them; after a few minutes you get back a 3D model of the object, complete with textures.
(You get an email with a link from which you download the model.)
For example:
GODZILLA_SCAN.png

This is a render of the model that resulted from a scan of a plastic Godzilla figure I had. The background is an HDRI.
Now you might be thinking that you could do something like this in Photoshop, so what's the big deal?
The difference is that this is a 3D mesh model, to which you can do anything you could do to any other mesh, such as building an armature rig, binding the mesh to it, and animating the armature to do something like this:
godzilla2.gif

Giving life to a rigid plastic figure.

I used the free version of the app, and it took just a couple of attempts to get a usable set of photos (proper soft lighting with no harsh shadows is the key).
The higher-end paid version has some extra bells and whistles, such as a higher limit on the number of photos you can upload per object, an option for manual camera settings, and an unlimited number of model exports (the free version limits you to 3 a week).
 
  • #26
What is the mesh format, please? Portable OBJ+MTL, or proprietary, Blender-only?
 
  • #27
You can export in either OBJ or FBX format.
In case you're wondering, the app I used is KIRI Engine.
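If you then want to pull one of those exported scans into Blender by script, it's nearly a one-liner (a sketch; the file path is a placeholder, and Blender 3.2+ ships this newer OBJ importer, while older versions use bpy.ops.import_scene.obj instead):

```python
import bpy

bpy.ops.wm.obj_import(filepath="godzilla_scan.obj")
scan = bpy.context.selected_objects[0]   # imported objects come in selected
print(scan.name, len(scan.data.vertices), "vertices")
```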
 
  • #28
Thank you.
 

