Author Topic: GTX 480 vs GTX 780 Real world Lumion comparison  (Read 4942 times)

Nico RVArq.com

    Reputation: 58
GTX 480 vs GTX 780 Real world Lumion comparison
« on: June 08, 2013, 10:11:46 pm »
June 08, 2013, 10:11:46 pm
I've had the pleasure of upgrading my GTX 480 to a GTX 780.
I've run some tests on 3 projects (A&B / C&D / E&F) before and after.
Sorry, I haven't taken screenshots, but I can sincerely say that Nvidia is not paying me for this topic :P

A Project: 14 or 15 fr/sec vs 20 or 21 fr/sec
GTX480 5.66 sec vs GTX780 2.86 sec

B
GTX480 4.2 sec vs GTX780 2.52 sec

C Project: 3 or 4 fr/sec vs 8 or 9 fr/sec
GTX480 8.6 sec vs GTX780 2.41 sec

D
GTX480 6.06 sec vs GTX780 2.66 sec

E Project: 7 or 8 fr/sec vs 9 or 10 fr/sec
GTX480 13.6 sec vs GTX780 3.67 sec

F
GTX480 14.31 sec vs GTX780 4.39 sec

byzantium1200

    Reputation: 43
Re: GTX 480 vs GTX 780 Real world Lumion comparison
« Reply #1 on: June 09, 2013, 08:49:06 am »
June 09, 2013, 08:49:06 am
Thanks Nico.

It is also interesting that when I render a whole city the render time is 3 secs, but if I get near a single wall to fill the frame, render time increases to 7 secs. Any comments from the staff?

Xtreme-L

    Reputation: 7
Re: GTX 480 vs GTX 780 Real world Lumion comparison
« Reply #2 on: June 09, 2013, 02:27:30 pm »
June 09, 2013, 02:27:30 pm
Problems with the GTX 780???

Re: GTX 480 vs GTX 780 Real world Lumion comparison
« Reply #3 on: June 10, 2013, 11:27:24 am »
June 10, 2013, 11:27:24 am
Quote from: byzantium1200
Thanks Nico.

It is also interesting that when I render a whole city the render time is 3 secs, but if I get near a single wall to fill the frame, render time increases to 7 secs. Any comments from the staff?

Hi byzantium1200

Um  ::) I'll have to check with our dev team on the technicalities of that.

What resolutions are your textures for the walls?

I'm reasonably sure Lumion uses automatic mipmaps for its surfaces, so textures are rendered at a range of resolutions, say from 256x256 upwards.

The further away a surface is, the smaller the mipmap level used internally, and the quicker it can be read from disk, cached, and passed on and off the GPU.

As you get closer, the full high-resolution texture finally gets used. If it's large, for example higher than say 4096x4096, then depending on GPU performance there will be more noticeable lag as the texture is read into GPU memory and used for rendering.

To accommodate this texture, the GPU may also have to swap other textures in and out of its buffers, costing more time.

If the scene also needs lots of textures and they are all separate images, there is a further cost as the GPU fetches, orders and swaps them for use.
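To put rough numbers on the mipmap behaviour described above, here is a minimal Python sketch (not Lumion's actual code; the function name and the 4-bytes-per-texel uncompressed RGBA assumption are mine). Each mip level halves the previous one, so the full chain adds only about a third on top of the base texture, but the top level alone dominates the memory traffic once you get close enough for it to be sampled:

```python
# Byte size of each mip level for a square texture, assuming 4 bytes/texel
# (uncompressed RGBA). Each level is half the dimensions of the previous one.
def mip_chain_bytes(base_size, bytes_per_texel=4):
    sizes = []
    size = base_size
    while size >= 1:
        sizes.append(size * size * bytes_per_texel)
        size //= 2
    return sizes

chain = mip_chain_bytes(4096)       # 4096x4096 RGBA texture
base_mb = chain[0] / 2**20          # 64.0 MB for the top level alone
total_mb = sum(chain) / 2**20       # ~85.3 MB for the whole 13-level chain
print(f"base: {base_mb:.1f} MB, full chain: {total_mb:.1f} MB")
```

So the jump byzantium1200 sees when filling the frame with one wall is consistent with suddenly paying for the top mip level of a large texture instead of a tiny distant one.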

byzantium1200

    Reputation: 43
Re: GTX 480 vs GTX 780 Real world Lumion comparison
« Reply #4 on: June 10, 2013, 11:42:10 am »
June 10, 2013, 11:42:10 am
The frame times are for my Babylon city model. I used many 4K textures, mostly with normal maps, and several 8K textures. As I don't see any performance drop with the GTX Titan (I get 30 fps with everything turned on), I did not spend much time on optimization. The render time changes inversely (and continuously) with distance; if there were a swap delay, shouldn't it get faster after the swaps?

Re: GTX 480 vs GTX 780 Real world Lumion comparison
« Reply #5 on: June 10, 2013, 01:44:55 pm »
June 10, 2013, 01:44:55 pm
Performance is roughly influenced by 4 main factors:

1) Bandwidth
This is the data flowing through the GPU. Bigger textures mean more data flowing to and from memory. If the GPU's memory is not fast enough to keep up with the rest, this will be the limiting factor. In some rare cases it might even be the amount of memory that causes a problem: when there is not enough memory available, the GPU has to swap data between system memory and GPU memory, causing a major dip in performance.

2) CPU
The CPU issues all the commands to the GPU and handles things like sound, logic and physics. If you have a slow CPU that cannot feed the GPU quickly enough, this will be the limiting factor.

3) Vertex count
This is not as much of a problem as it used to be, because vertex processing and pixel processing are now done by the same shading units. In general, rendering a huge number of vertices will reduce the overall speed.

4) Pixel processing
This is usually the problem in Lumion. Shadows, lighting, reflections, colors and special effects are all pixels with complex algorithms attached to them. The more pixels the final image contains, the slower you render. You can test this by reducing the number of pixels: the option is in the options screen and is called "Editor Resolution". For a final render, this is of course the resolution of the output image.
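The arithmetic behind point 4 is simple enough to sketch (a hypothetical illustration, assuming per-pixel shading cost is roughly constant): halving the editor resolution in both dimensions cuts the pixel-processing work to about a quarter.

```python
# Per-pixel shading cost scales with pixel count, so resolution changes
# have a quadratic effect on pixel-processing load.
def pixel_count(width, height):
    return width * height

full_hd = pixel_count(1920, 1080)   # 2,073,600 pixels
half_res = pixel_count(960, 540)    #   518,400 pixels
print(f"full HD does {full_hd / half_res:.0f}x the pixel work")  # 4x
```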

Some of these factors are more or less independent of each other and work like the weakest link in a chain. If you have a really slow CPU, for example, you will see zero performance increase from a faster GPU. The same goes for bandwidth: if the memory is really slow, the GPU has to wait a lot while processing pixels.

The Titan has faster memory than the 780, which might explain the difference. You could try reducing texture sizes to see if it helps. By the way: Lumion automatically caps texture size to avoid this problem, although you can press a hotkey to force Lumion to load the full 8K texture onto the GPU.
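The texture cap mentioned above can be sketched like this (a hypothetical illustration; the cap value and function are assumptions, not Lumion's actual implementation). Clamping an 8K texture to 4096 cuts its top-level VRAM footprint by 4x:

```python
# Halve a square texture's dimension until it fits under the cap.
def capped_size(size, cap=4096):
    while size > cap:
        size //= 2
    return size

full_mb = 8192 * 8192 * 4 / 2**20              # 256 MB uncompressed RGBA at 8K
capped_mb = capped_size(8192) ** 2 * 4 / 2**20  # 64 MB after the cap
print(f"8K uncapped: {full_mb:.0f} MB, capped: {capped_mb:.0f} MB")
```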

Also, the Titan has more pixel-processing power, which might have an effect when more surfaces requiring shading are on screen.

Xtreme-L

    Reputation: 7
Re: GTX 480 vs GTX 780 Real world Lumion comparison
« Reply #6 on: June 10, 2013, 10:53:24 pm »
June 10, 2013, 10:53:24 pm
Nico is comparing the GTX 480 vs the GTX 780, and looking at the benchmark list, the difference is quite big.

Passmark scores:
GeForce GTX 780: 7,106
GeForce GTX 480: 4,333
GeForce GTX TITAN: 8,368
http://gpuboss.com/gpus/GeForce-GTX-780-vs-GeForce-GTX-480

Michael

    Reputation: 10
Re: GTX 480 vs GTX 780 Real world Lumion comparison
« Reply #7 on: June 20, 2013, 05:17:25 pm »
June 20, 2013, 05:17:25 pm
How much VRAM do these two cards come with?