RENDER ZONE

http://www.ontmoeting.nl/renderzone/


 

Interviews Render firms

Iray, NVIDIA's render program, here with Iray Photoreal. Iray renders photorealistic images that are physically plausible, based on the interaction of light and materials in the physical world. An apt name.


Brecht van Lommel

June 16, 2014

http://www.blenderguru.com/podcasts/brecht-leaves-blender/

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

 

Over the weekend we got the sad news that Brecht Van Lommel, the primary developer of Cycles, is leaving Blender.

Here's his post from BlenderNation:

"In July I will start a new job at Solid Angle, working on the Arnold renderer. This unfortunately means I will no longer be able to add new features to Cycles, though I'll still be around and happy to assist other developers.

Many important features in past releases have already been implemented by other contributors, and I expect that the improvements will keep coming.

As for other Blender development, I'll continue to be involved as a volunteer.

Many thanks to all contributors and users. I'm especially grateful to Ton for supporting my work on Cycles. I think we have pushed open source production rendering quite a bit further, and I hope to see you all continue doing awesome things with Cycles."

Thanks, Brecht.


Interview Brecht

http://blenderdiplom.com/en/interviews/400-interview-brecht-van-lommel-on-cycles.html

Cycles GPU render software creator Brecht van Lommel to work with Arnold

http://www.cgchannel.com/2014/06/cycles-developer-brecht-van-lommel-to-work-on-arnold/
 
http://www.blendernation.com/2014/06/14/brecht-van-lommel-leaves-blender-institute-will-work-on-arnold-renderer/


Blender Foundation

Excellent conference talks on the various issues of Blender and open source: organization and project coherence.

Ton Roosendaal (Blender developer in Amsterdam)

"Documentation is always seen as something for others to do, but it is essential to communicate with other developers, and of course users, to get it done and done right."

https://www.youtube.com/watch?v=ziPLNUfm7KA

Live stream, 25 Oct 2013

Brecht talks about current Blender UI status and challenges.

 


Blender lecture.

 

History Blender

Ton Roosendaal, Dutch software developer. Creator of the open-source Blender and of Traces (an earlier Amiga ray tracer). Chairman of the Blender Foundation.

http://www.blender.org/foundation/history/

https://en.wikipedia.org/wiki/Ton_Roosendaal

http://www.blenderguru.com/articles/interview-with-ton-roosendaal/

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

 

Ton Roosendaal co-founded the Dutch animation studio NeoGeo.
In May 2002 Ton Roosendaal started the non-profit Blender Foundation.
In July 2009, Ton received an Honorary Doctorate in Technology from Leeds Metropolitan University for his outstanding contribution to creative technology.
Ton studied Industrial Design in Eindhoven before founding the animation studio NeoGeo, where Blender was developed as the in-house tool. In 2000 he moved to Amsterdam, the Netherlands. In 2007 he founded the Blender Institute, where he is still working full time on crowd-funded Blender open-source and Open Movie projects. Ton is single and lives in Amsterdam, the Netherlands.
 

Ton looking into the Blender crowd.


Interview with Maxwell Render expert James Coleman

Nov 2013

http://www.archicadmonkey.com/files/interview-james-coleman-maxwell-render.html

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

 

How do you see CGI technology developing in the coming years?

I think aspects from the video game industry are going to slowly leak into ours. Video games have a much bigger audience than 3D/CAD and create a huge amount of revenue. Imagine a game engine so good that it could act as a render engine for you, and you could show any angle of an object in any lighting setup instantly. That's the way render engines are going at the moment; game engines are practically already there. And imagine using an Oculus Rift to look around a building before it was constructed...

That being said, because 3D viewing in general still hasn't really been an accepted part of the industry, I think the main improvements will be speed, ease of use and specialism. How renders are displayed in the future will also have a big impact on how they are created. Will they be handed around on tablets or displayed on 85-inch 4K screens?


In between:

http://www.blenderguru.com/articles/20-jaw-dropping-architectural-renders/

Several fantastic architectural renderings by Blender gurus.


Game programming

Jason Jones profile

http://marathon.bungie.org/story/jasonjonesTofTMPG.html

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

Q. Now lighting, do you handle that in any special way?

A. This is sort of a general thing. All the lighting in Marathon is done through look-up tables. We don't do any calculations on the fly. We're not sitting there with a pixel, saying "All right, we need to darken this pixel 15 percent," and doing a bunch of multiplications and divisions. That's true for pretty much everything in Marathon. Anything interesting that happens probably goes through a look-up table.

So we have a lighting intensity value and we have a pixel color value, and we combine those in the right ways, send them through a table, and we get out a new pixel value. Everything that goes to the screen gets lighted in some way. Even if a pixel's value is 100 percent intensity, it still passes through a lighting table and remains unchanged. On the lowest level, the lighting is done through look-up tables. On a high level, we assign intensities to individual sprites walking around the world, and we can assign intensities to the floors and ceilings of the map. The walls then intelligently pick up the intensities of the ceilings and floors above and below them.
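The table scheme described above can be sketched in a few lines. This is only an illustration of the idea, not Marathon's actual code: the palette, level count, and names are invented here, and the real game worked in indexed 8-bit color.

```python
LEVELS = 4          # number of light intensity levels (toy value)
PALETTE = [         # toy palette: color index -> (r, g, b)
    (255, 0, 0),
    (0, 255, 0),
    (0, 0, 255),
    (255, 255, 255),
]

# Precompute lighting_table[level][color_index] -> shaded color.
# All multiplications happen once, at startup, so the inner
# render loop never does arithmetic per pixel.
lighting_table = [
    [tuple(c * level // (LEVELS - 1) for c in color) for color in PALETTE]
    for level in range(LEVELS)
]

def shade(color_index, level):
    """Per-pixel shading is a single table lookup, no arithmetic."""
    return lighting_table[level][color_index]
```

Note that at the top intensity level the table maps every color to itself, matching the remark that a 100 percent pixel still passes through the table and comes out unchanged.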

 

Q. For the rendering engine, how many people did you have working on it, and how long did it take?

A. There were three people working on Marathon, and it took us a little less than a year. We started in May and were finished in December 1994.

 

Q. Was there one person handling the engine, or did everybody work on it?

A. Two people worked on the texture mapping and the rendering parts of the game. The third programmer primarily handled writing the Mac user interface and some of the less important parts of the actual game itself.


Robert Tiess

Jul 22, 2009

Artist in Blender community

http://www.blenderguru.com/articles/interview-with-robert-tiess/

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

I first started with Blender in 2003. I had always wanted to further my artistic experience in the 3D digital realm after having worked in natural media, mixed media, and 2D digital media for years. The discovery of Blender opened a new dimension, both literal and figurative, and I have been enjoying the exploration of that dimension ever since!


Twilight of the GPU

an epic interview with Tim Sweeney

Sep 15, 2008

http://arstechnica.com/gaming/2008/09/gpu-sweeney-interview/

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

 

Tim Sweeney, co-founder of Epic Games and the developer behind the Unreal series of 3D game engines.

JS: So you guys are just going to skip these graphics APIs entirely?
(regarding DirectX, OpenGL, or languages like CUDA C++.)

TS: That's my expectation. Graphics APIs only make sense in the case where you have some very limited, fixed-function hardware underneath the covers. It made perfect sense back with the 3Dfx Voodoo and the first NVIDIA cards, and the very first GeForces, but now that you have completely programmable shaders, the idea that you divide your scene up into triangles rendered in a certain order to a large framebuffer using fixed-function rasterizer features is really an anachronism. With all that general hardware underneath, why do you want to render scenes that way when you have more interesting possibilities available?

 

JS: So you guys are just going to use CUDA or whatever?

TS: It could be any general-purpose programming language. But I assume in that case we'll write an algorithm that takes as its input a scene in our own little representation defined by our own data structures, and spits out a framebuffer full of colors, and generate that using any sort of technique.

TS: From my point of view, the ideal software layer is just to have a vectorizing C++ compiler for every architecture: NVIDIA, Intel, AMD, whoever. Let us write code in C++ to run on the GPU, including shaders in C++ and rendering algorithms in C++, where programmers explicitly create threads, hand work off to them, synchronize shared data, and so on. And then use what NVIDIA calls "pixel pipelines" and Intel calls "vector registers" by means of a vectorizing compiler that takes loops, unrolls them, and runs them on the wide vector units.

I think you have a fairly well-defined lowest common denominator there that you should be able to fit future hardware into without much pain. And that's vastly preferable to different languages on different platforms. That's just a non-starter—we're supposed to use one programming language for targeting Intel devices and another for targeting NVIDIA devices? It just doesn't work. If that were the case with current consoles, then we would've picked one and stuck with just that console. There's no way we could've maintained multiple engines in multiple programming languages just because console vendors couldn't agree on standard programming languages.

I think that's a pretty obvious step. With CUDA, NVIDIA has taken C and added extensions onto it to specify where certain pieces of code can run; you can certainly add those extensions onto C++ and have a full general C++ language capable of running on NVIDIA or other platforms, with some extended keywords that let you do platform-specific optimizations. That, to me, is the best lowest common denominator, because it's a standard language, it's non-controversial, and even though it's not optimal for a lot of reasons, it's good enough.
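Sweeney's loop-vectorization point can be mimicked in a small sketch. This is a toy model, not real compiler output: the 4-wide vector unit is hypothetical, and the "compiler transformation" is written out by hand, processing pixels one vector-register-sized group at a time.

```python
LANES = 4  # width of the hypothetical vector unit

def shade_one(i):
    """Scalar shader body, as the programmer would write it."""
    return min(1.0, i * 1.5 + 0.1)

def shade_vectorized(intensities):
    """What a vectorizing compiler conceptually emits: the per-pixel
    loop unrolled LANES at a time, each group standing in for one
    wide vector operation over a vector register."""
    out = []
    for base in range(0, len(intensities), LANES):
        group = intensities[base:base + LANES]   # one "vector register"
        out.extend(min(1.0, i * 1.5 + 0.1) for i in group)
    return out
```

The transformed version computes exactly what the scalar loop computes; the payoff on real hardware is that each group runs as one wide instruction instead of four scalar ones.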

JS: So how, then, do hardware vendors differentiate their offerings? If everybody's doing their own thing, and there's a multiplicity of different rendering approaches out there in the wild, and game engines are really different from each other under the hood, then how do hardware vendors differentiate themselves? And we as a hardware site, for instance, how do we benchmark these and review them?

Is it going to be the case that one offering runs a particular category of games fairly well, but not as well as the Intel or ATI offering? Maybe one will be good at a specific type of rendering plus physics, but not so good at another class of renderers that ATI excels at.

TS: Well, I think the main distinguishing factor of hardware at that time is just going to be performance.

JS: But performance at what, is what I'm asking.


Antony Ward

Why choose non-photorealistic rendering (NPR)?

Apr 15, 2014

http://www.creativebloq.com/audiovisual/why-choose-npr-41411359

So after all that time, with all the blood, sweat and tears spilled trying to crack the photorealistic code, why should we then decide to go back and render anything in a non-photorealistic way?

Answer: It's a style thing

It's not all about entertainment, of course. NPR is also widely used in many other industries. Technical renderings of a product often wouldn't be possible if attempted in a realistic way.


Mark Holmes

Pixar

Oct 2009

http://www.thunderchunky.co.uk/articles/mark-holmes-pixar-graphics-guru/

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

 

Everything the public sees from Pixar is entirely CG; does the good old pencil and sketchpad still play an important part in the process?

Absolutely! Pencil and paper are critical to our process. Animation is an expensive and time-consuming process, CG animation even more so. It takes dozens of highly trained people, expensive hardware and software, and hours of render time to realize a final rendered film image. What pencil and paper afford, or more to the point, what the Art Department provides is relatively fast, cheap and highly iterative pre-visualization of the film before it ever touches the pipeline. This development time allows the director to experiment, find the right choices, and allow the look of the film to evolve. It doesn't matter what tools the artists use to express this vision, as long as it is fast and flexible. Pencil and paper, pastel, paint, charcoal, Photoshop, Illustrator, etc., they are all tools to help the director find the look of the film before it is made, and communicate that vision to all the technical artists who will bring it to the screen in a way that surpasses even our best estimations.


http://www.archicadmonkey.com/files/interview-james-coleman-maxwell-render.html

Nov 2013

James Coleman

expert Maxwell Render

 

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

I would love to continue the series with Maxwell V3, and I'm always interested in working closer with Next Limit, so fingers crossed I will be doing some V3 tutorials! (It would also be the perfect excuse to upgrade my computer to the latest OS.)

There's always been a lot of competition, but at the moment one of the reasons Maxwell still stands out to me is that it doesn't use any GPU acceleration. For example, I can't even try Octane because I don't have an NVIDIA graphics card. Maxwell also scales linearly with CPUs, multi-threads seamlessly, and has a brilliant network rendering system, so you always get the most out of your computer(s). Maxwell is definitely one of the best render engines available for arch-vis; in fact that's what I find it's used for most of the time, and Maxwell scales really well so it can also be used for automotive, product and jewellery renders without any problems. In my experience, users always agree that to get results comparable to Maxwell with any other render engine, you would be waiting a lot longer.



Maxwell Render's Head of Render Technology explains the GPU prototype

Juan Canada

 

http://blog.digitaltutors.com/dt-exclusive-maxwells-head-render-technology-explains-gpu-prototype/

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

Currently, GPU rendering is a fast-growing trend in 3D technology because it is reported to give modelers and animators higher-quality viewport previews that more closely resemble final renders.

GPUs are very fast at some things, while slower than CPUs at others. There is a lot of hype about GPUs that hides this fact: typically, SIMD architectures are very good when all the cores are doing more or less the same thing. In a ray-tracing context this means that GPU performance is good in simple scenes where rays follow similar paths, but the more complex the scene is in terms of both geometry and light transport (for example, a scene with a few polygons with glass, mirrors, and small holes can be very complex), the lower the performance of a GPU.
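That lockstep behaviour can be shown with a toy model; the warp size, scene, and cost metric here are all invented for illustration. SIMD lanes execute together, so a warp effectively pays once for every distinct branch any of its rays takes at each bounce, and divergent glass/mirror paths make it pay several times per step.

```python
def warp_cost(paths):
    """Toy SIMD cost: a warp of rays executes in lockstep, so at each
    bounce the warp pays once for every *distinct* action any ray in
    it takes. `paths` is one action sequence per ray, equal lengths."""
    steps = zip(*paths)                     # actions at each bounce, across rays
    return sum(len(set(step)) for step in steps)

# Coherent rays: every ray in the warp does the same thing each bounce.
coherent = [("diffuse", "diffuse")] * 4

# Divergent rays: glass, mirror and diffuse paths mixed in one warp.
divergent = [("glass", "refract"), ("mirror", "reflect"),
             ("diffuse", "diffuse"), ("glass", "reflect")]
```

In this model the coherent warp costs 2 lockstep passes while the divergent one costs 6 for the same number of rays, which is the effect Canada describes for scenes with glass, mirrors, and small holes.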


Piyush Patel

Founder of Digital Tutors

http://blog.digitaltutors.com/interview-with-digital-tutors-founder-piyush-how-we-began/


Victor Gonzalez, CEO Next Limit Tech. - Maxwell Render

Apr 10, 2007

The actual interview link below:

http://blog.novedge.com/2007/04/an_interview_wi.html

The key is the rendering algorithm and how all the pieces fit together. Throwing all the ingredients together in one big soup, like other rendering systems like to do, would not be enough. We decided, from the very beginning, to slowly cook a very powerful unbiased render without adding any artificial tricks that could reduce its correctness. The algorithm is able to track all the light paths efficiently, interact with the physical materials and combine them in the optical sensor. This whole process has been developed through the years by very rigorous people who would not restrict such scientific development in favor of a fast trick. The results are definitive: the quality of the render is amazing, even for us, still today. Keeping the maximum quality is our goal and we will keep working on improvements without losing this vision.
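What "unbiased" buys you can be shown with a minimal Monte Carlo sketch; this is a generic estimator, not Next Limit's algorithm. An unbiased estimator's expected value equals the true answer at any sample count, so adding samples only reduces noise and never skews the result, which is why such a renderer needs no correctness-reducing tricks.

```python
import random

def mc_estimate(f, n, seed=0):
    """Unbiased Monte Carlo estimate of the integral of f over [0, 1]:
    average f at n uniform random points. The expectation of this
    average is exactly the integral, for any n, so more samples only
    shrink the noise. The same principle underlies unbiased light
    transport, where f would be the contribution of a random light path."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n
```

For example, estimating the integral of x squared over [0, 1] converges on the exact value 1/3 as n grows, without any systematic offset.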

With posted reactions about Maxwell for SketchUp version 1.0 not working well.


Peter Guthrie

3D artist working with V-Ray

http://vray.info/interviews/peterguthrie/

ABSTRACT and OVERVIEW OF ORIGINAL ARTICLE, please refer to the link.

 

Are there any plug-ins or applications that you find indispensable?

SketchUp, 3ds Max, Photoshop and Lightroom are my main apps; the plugins I use most are V-Ray (of course) and V-Ray Scatter, and I have just started using Photolooks for Photoshop, which seems very promising.

 

What do you think of V-Ray? How could you compare it to other renderers? Do you use other renderers? What do you like / don't like in V-Ray?

V-Ray is the king in my opinion. There aren't many guys like Vlado Koylazov (co-founder and head of software development at Chaos Group - editor's note) in the world, and the work he and the Chaos Group team have done coding V-Ray over the past 15 years has defined a large portion of my life. I'll always owe them one for providing me the opportunity to realise my imagination. But just because I'm a V-Ray fanboy doesn't mean I don't use other renderers. Octane Render is probably my favorite, but it's just not quite there yet for full-on production use, so it's sitting in the backseat constantly annoying me with "Are we there yet?". What I love about V-Ray is that the team is always listening closely to community feedback. Chaos Group aren't afraid to fail, and they will implement features they personally don't believe in just to please customers.

At the moment I really hate how long it's taking to have specular layers added in as a feature. On my Mastering V-Ray course there is a huge limitation at the moment that requires using blend materials to create custom BRDFs. Instead of adding only the specular and reflection layers to the calculation, we currently have to calculate an entirely different shader, and it's very slow this way. The Cinema 4D guys already have the feature and use it universally, so why is it taking so long to be added to Max? If you're listening, Vlado, please answer my prayers!


 

Just a simple question

Why are there no interviews with CEOs of render companies like Artlantis, Render[in], SU Podium and KeyShot?

Those companies have a different way of communicating and like to stay anonymous. They have a different hierarchy and culture.

In the open-source world, and in the fast-changing rendering world with upcoming GPU renderers, we see just the opposite. We find a lot of interviews about Blender, Cycles and V-Ray, to mention just a few. But also on the conventional CPU side we find good information from Maxwell Render in Spain. We see those firms willing to communicate at SIGGRAPH each year.

In the game world and the film industry it is standard practice to tell everything you know about YOUR program, including a roadmap of upcoming features.

See also the communication page below.

 

Communication page
