July 18, 2014, 02:15:34 am
Is there any kind of timeline for accelerating Lumion rendering performance? This seems to be a sore spot for the developers because it keeps getting asked over and over, but I'm wondering if there's any insight into plans for the future.
Personally, I'm an IT administrator and I try to equip my users as well as I can within financial reason. I have my designer set up with an NVIDIA GTX 780 Ti 3 GB, but I have 40 other PCs that could be put to great use.
Dual- and quad-GPU setups are starting to become common in high-end gaming rigs. If I could build a system like that for my designer and cut his animation render times from 5-12 hours down to 1-2 hours, it'd be AMAZING.
I'm not trying to anger the Lumion developers by asking this yet again. I just think that if you posted a timeline, or at least pinned a thread in your FAQ section specifically about SLI/CrossFire/multi-GPU/render farms, it would help with the aggravation, both for the users wondering why it's not implemented yet and for the developers who are sick of answering the question.
Just my $0.02, but wouldn't a render farm be the easiest to implement? Each farm PC would render its assigned range of frames, and the finished segments could simply be stitched back together into the final video at the end. Just create a headless service that runs on the farm PCs, and have a render manager dole out frame ranges based on each farm PC's Lumion benchmark score, something along the lines of the sketch below. Include a wake-on-LAN function to wake up any sleeping farm PCs. I could then pick up a bunch of $300 video cards and go to town. Heck, I could even install them as secondary cards with no monitor attached, and the users would never know their machines were part of the farm. Well, other than the heat and Lumion chewing up a bunch of RAM.
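For what it's worth, here's a rough Python sketch of the render-manager side. The node names, MAC addresses, and benchmark scores are all made up, and nothing in it talks to Lumion itself; it only shows the two generic pieces, the standard wake-on-LAN magic packet and a frame split proportional to each node's score:

```python
import socket
import struct

# Hypothetical farm inventory: a MAC address and a benchmark score per node.
# The scores stand in for Lumion's built-in performance rating.
FARM = {
    "node-01": {"mac": "00:11:22:33:44:55", "score": 14000},
    "node-02": {"mac": "00:11:22:33:44:66", "score": 9000},
    "node-03": {"mac": "00:11:22:33:44:77", "score": 5000},
}

def wake(mac: str) -> None:
    """Send a standard wake-on-LAN magic packet: 6 bytes of 0xFF followed
    by the target MAC repeated 16 times, broadcast on UDP port 9."""
    payload = bytes.fromhex(mac.replace(":", ""))
    packet = b"\xff" * 6 + payload * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, ("255.255.255.255", 9))

def split_frames(total_frames: int, scores: dict) -> dict:
    """Divide a frame range among nodes in proportion to benchmark score,
    so a machine twice as fast gets roughly twice as many frames."""
    total_score = sum(scores.values())
    ranges, start = {}, 0
    for i, (node, score) in enumerate(scores.items()):
        # Give the last node whatever remains, to avoid rounding gaps.
        if i == len(scores) - 1:
            count = total_frames - start
        else:
            count = round(total_frames * score / total_score)
        ranges[node] = (start, start + count - 1)
        start += count
    return ranges

if __name__ == "__main__":
    for node, info in FARM.items():
        wake(info["mac"])
    jobs = split_frames(1800, {n: i["score"] for n, i in FARM.items()})
    print(jobs)
    # {'node-01': (0, 899), 'node-02': (900, 1478), 'node-03': (1479, 1799)}
```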
You could even do a similar thing within a single multi-GPU system: just run multiple instances of Lumion, one per GPU, each rendering its own slice of the animation. Yeah, it'd require an insane amount of RAM, but there are workstation/server motherboards that will handle more RAM than I can afford.
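The multi-GPU version would just be a thin launcher on top of that same frame split. To be clear, Lumion has no headless command-line renderer as far as I know, so the executable name and the --gpu/--frames/--output flags below are pure invention to illustrate the one-instance-per-GPU idea:

```python
import subprocess

# Everything below is hypothetical: Lumion does not actually expose a
# headless command-line renderer, so the executable path and all flags
# are invented purely for illustration.
LUMION = r"C:\Program Files\Lumion\lumion_headless.exe"  # does not exist
SCENE = r"D:\projects\tower.ls4"                         # made-up scene file

def launch_instances(frame_ranges):
    """Start one (hypothetical) Lumion process per GPU, each rendering its
    own frame slice. Each instance would load the full scene, which is why
    RAM use scales with the number of GPUs."""
    procs = []
    for gpu_index, (first, last) in enumerate(frame_ranges):
        procs.append(subprocess.Popen([
            LUMION, SCENE,
            "--gpu", str(gpu_index),          # invented flag
            "--frames", f"{first}-{last}",    # invented flag
            "--output", rf"D:\render\gpu{gpu_index}",  # invented flag
        ]))
    for p in procs:
        p.wait()  # block until every instance finishes its slice

# e.g. a 4-GPU box splitting 1,800 frames evenly:
launch_instances([(0, 449), (450, 899), (900, 1349), (1350, 1799)])
```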
Thanks for your time.