

3D Artist - July 2018

Creative Assembly
pro character building
Dive into an underwater scene
Revealed: CG innovations 25 years on
Discover adaptable high-poly assets
Inside one of Europe’s hero vendors
Software ZBrush, Marvelous
Designer, 3ds Max,
Photoshop, Quixel Suite,
Marmoset Toolbag 3
Level Up Your Game Art Page 36
Future PLC Richmond House, 33 Richmond Hill,
Bournemouth, Dorset, BH2 6EZ
Editor Carrie Mok
01202 586247
Art Editor Newton Ribeiro
Staff Writer Brad Thorne
Production Editor Katharine Marsh
Group Editor in Chief Amy Hennessey
Senior Art Editor Will Shum
Adam Barnes, Greg Barta, Orestis Bastounis, Andrew Comb,
Tanya Combrinck, Matthias Develtere, Ian Failes, Scott
Freeman, Harriet Knight, Ben Le Tourneau, Jody Sargent,
Danny Sweeney, TheLaserGirls, Ryan Wells
James Sheppard
All copyrights and trademarks are recognised and respected
Media packs are available on request
Commercial Director Clare Dove
Advertising Manager Mike Pyatt
01225 687538
Account Director Chris Mitchell
01225 687832
3D Artist is available for licensing. Contact the International
department to discuss partnership opportunities
International Licensing Director Matt Ellis
Email enquiries
UK orderline & enquiries 0344 848 2852
Overseas order line and enquiries +44 (0) 344 848 2852
Online orders & enquiries
Head of subscriptions Sharon Todd
Head of Newstrade Tim Mathers
Head of Production Mark Constance
Production Project Manager Clare Scott
Advertising Production Manager Joanne Crosby
Digital Editions Controller Jason Hudson
Production Manager Frances Twentyman
Chief Operating Officer Aaron Asadi
Commercial Finance Director Dan Jotcham
Global Content Director Paul Newman
Head of Art & Design Greg Whitaker
Printed by William Gibbons & Sons Ltd, 26 Planetary Road,
Willenhall, West Midlands, WV13 3XT
Distributed by Marketforce, 5 Churchill Place, Canary Wharf,
London, E14 5HU Tel: 0203 787 9060
Distributed in Australia by Gordon & Gotch Australia Pty Ltd,
26 Rodborough Road, Frenchs Forest, New South Wales 2086 Tel: + 61 2 9972 8800
ISSN 1759-9636
The paper in this magazine was sourced and produced from sustainably
managed forests, conforming to strict environmental and socioeconomic
standards. The manufacturing paper mill holds full FSC (Forest Stewardship
Council) certification and accreditation.
This issue, we’ve compiled
an incredible 28 pages of
game art to help you level
up your skills. From artists like
Danny Sweeney, character artist
at Creative Assembly, to
Matthias Develtere, a 3D artist
at MachineGames, you can
expect to hear triple-A
techniques and learn step by step how they achieve
their stunning assets.
You can also hear from Mak Malovic from
Bluepoint Games, who are the masters of the
remaster. Malovic talks about reworking the beloved
classic Shadow of the Colossus from page 36, or learn
35 tips to up your real-time game on page 44. Jody
Sargent also gives us her top advice on creating a
beautiful underwater scene in UE4 on page 72.
As the next Jurassic World hits the big screen,
we’ve ventured back into Jurassic Park with Ian Failes,
who talks to some of the artists who created the
ground-breaking effects, on the 25th anniversary of
the film’s release.
We’ve also reviewed LightWave, as the software
returns for the first time in four years. What lies in
store in the new version? Andrew Comb has the
answer over on page 82.
Enjoy the issue.
If you submit material to us, you warrant that you own the material and/
or have the necessary rights/permissions to supply the material and
you automatically grant Future and its licensees a licence to publish
your submission in whole or in part in any/all issues and/or editions
of publications, in any format published worldwide and on associated
websites, social media channels and associated products. Any material you
submit is sent at your own risk and, although every care is taken, neither
Future nor its employees, agents, subcontractors or licensees shall be liable
for loss or damage. We assume all unsolicited material is for publication
unless otherwise stated, and reserve the right to edit, amend, adapt all
submissions.
Carrie Mok, Editor
Sign up, share your art and chat to other artists at
Get in touch...
All contents © 2018 Future Publishing Limited or published under licence.
All rights reserved. No part of this magazine may be used, stored,
transmitted or reproduced in any way without the prior written permission
of the publisher. Future Publishing Limited (company number 2008885) is
registered in England and Wales. Registered office: Quay House, The
Ambury, Bath BA1 1UA. All information contained in this publication is for
information only and is, as far as we are aware, correct at the time of going
to press. Future cannot accept any responsibility for errors or inaccuracies
in such information. You are advised to contact manufacturers and retailers
directly with regard to the price of products/services referred to in this
publication. Apps and websites mentioned in this publication are not under
our control. We are not responsible for their contents or any other changes
or updates to them. This magazine is fully independent and not affiliated in
any way with the companies mentioned herein.
Future plc is a public
company quoted on the
London Stock Exchange
(symbol: FUTR)
Chief executive Zillah Byng-Thorne
Non-executive chairman Richard Huntingford
Chief financial officer Penny Ladkin-Brand
Tel +44 (0)1225 442 244
This issue’s team of pro artists…
Danny Sweeney is a character artist at
Creative Assembly, and he’s kindly
gone through the triple-A techniques it
takes to create a Total War: Warhammer
II character on page 50.
3DArtist username DannSw
Matthias is a 3D artist at
MachineGames and was responsible
for almost all the vehicles in Wolfenstein
II. Who better to walk us through a
high-poly asset on page 58?
3DArtist username DevMatt
Adam took a trip to Germany this
month to speak to one of Berlin’s
biggest vendors, Rise FX. Find out
about the studio’s beginnings over on
page 30.
3DArtist username N/A
Greg turns his attention to lighting this
issue, after his successful journeys in
scientific weather phenomena. He
gives us top tips to upgrade your
lighting on page 66.
3DArtist username N/A
Life always finds a way and back in
1993, Jurassic Park certainly did. Ian
spoke to the artists behind the
game-changing effects on the film’s
25th anniversary on page 22.
3DArtist username N/A
Jody is a senior environment artist with
ten years of games industry experience.
She has kindly revealed how she
accomplished her amazing underwater
scene in Unreal Engine 4 on page 72.
3DArtist username JodySargent
Sarah and Dhemerae make up
TheLaserGirls – 3D-printing educators,
evangelists and enthusiasts. This issue,
they’ve given us top tips for printing our
first prop sword on page 74.
3DArtist username TheLaserGirls
Tanya has spoken to some outstanding
artists working in the industry today to
give us real-time tips from Unreal
Engine to Unity and Amazon
Lumberyard. It’s on page 44.
3DArtist username N/A
Orestis has taken the mid-range
Chillblast Fusion Render OC Lite
P4000 Professional 3D Editing
Workstation out for a spin this issue.
Read his review on page 80.
3DArtist username N/A
What’s in the magazine
It’s important to
research what
type of clothing
and armour the
group wears
News, reviews
& features
12 The Gallery
A hand-picked collection of phenomenal
and inspirational artwork
Danny Sweeney gives us
triple-A tips Page 50
22 VFX Finds A Way
Ian Failes goes back in time to 1993 to
discover all the ground-breaking CG
innovations of Jurassic Park
28 Subscribe Today!
Save money and never miss an issue
30 Reaching New Heights: Rise FX
Adam Barnes goes behind the scenes at
one of the biggest vendors in Berlin
36 Game Art Made Easy
Learn from triple-A artists on the best
ways to up your skills
44 35 Tips To Supercharge
Real-Time Renders
Master Unreal Engine 4, Unity, CryEngine
and more!
78 Technique Focus:
06:00 am
Bohdan Kryvetskyy on the inspiration
behind his piece
80 Review: Chillblast Fusion Render
OC Lite
Orestis Bastounis gets to grips with this
Chillblast machine
82 Review: LightWave 2018
Andrew Comb takes a look at NewTek’s
first release in four years
Model an adaptable
high-poly asset
98 Technique Focus:
Chappie Bust Fan Art
John Olofinskiy tells us how he challenges
himself with his art
Game Art
Made Easy
Save up to 20%
Turn to
page 28
for details
New Heights:
Rise FX
Create an underwater
scene in UE4
The Pipeline
50 Step By Step: Create a Dark Elf
from Warhammer
Creative Assembly’s Danny
Sweeney talks triple-A characters
58 Step By Step: Model an
adaptable high-poly asset
Matthias Develtere on creating
vehicles for in-game and cinematic
66 Step By Step: Master lighting
and materials
Greg Barta returns with an essential
look at how to improve your lighting
I figured if I got a bone
test up, just to prove it could
be done, then something
might come of it
Steve ‘Spaz’ Williams
on how a bit of rebellion
changed Jurassic Park forever
Page 22
Visit the 3D Artist online shop at
for back issues, books and merchandise
72 Pipeline Techniques: Create an
underwater scene in UE4
Jody Sargent helps us boost our
Unreal environments
74 Pipeline Techniques:
Design your first
prop sword
TheLaserGirls takes us into the
world of 3D printing
The Hub
86 Community News
We’ve rounded up the launch of our
debut event, Vertex!
88 Industry News
Locksmith opens Soho studio
90 Opinion
Ben Le Tourneau &
Scott Freeman
The Operators Creative cofounders
on crafting digital assets for mobile
92 Project Focus
VW – Born Confident
MPC’s Fabian Frank on creating
marvellous creatures
94 Industry Insider
Johannes Richter
Framestore’s FX lead on Guardians
of the Galaxy Vol. 2’s opening scene
96 Readers’ gallery
The very best images of the month
from our online community
Free with
your magazine
SideFX Houdini Foundations eBook
3 premium CGAxis models
25 textures
Plus: all of this
is yours too…
Courtesy of SideFX, learn
procedural workflows
Download plenty of high-quality textures
Houdini and Clarisse tutorial
A collection of images
to use in your work
3D printing tutorial
Making-of video
special guides to accompany the rest
of our unmissable tutorials
Log in to
Register to get instant access
to this pack of must-have
creative resources, how-to
videos and tutorial assets
Free for digital
readers, too!
Read on your tablet,
download on your computer
The home of great
downloads – exclusive to
your favourite magazines
from Future!
Secure and safe online
access from anywhere
Free access for every
reader, print and digital
Download only the files
you want, when you want
All your gifts, from all your
issues, in one place
Get started
Everything you need to know about accessing
your FileSilo account
Follow the instructions
on screen to create an
account with our secure FileSilo
system. Log in and unlock the
issue by answering a simple
question about the magazine.
Access our entire library of resources and gifts
from more than 40 issues with a money-saving subscription
You can access FileSilo
on any computer, tablet
or smartphone device using any
popular browser. However, we
recommend that you use a
computer to download content,
as you may not be able to
download files to other devices.
Over 50 hours
of video guides
More than
1,000 textures
Hundreds of
3D models
The very best
walkthroughs around
Brought to you by
quality vendors
Vehicles, foliage,
furniture… it's all there
Head to page 28 to subscribe now
If you have any
problems with
accessing content on FileSilo,
take a look at the FAQs online
or email our team at the
address below
Already a print subscriber?
Unlock the entire 3D Artist FileSilo library with your unique code, printed
above your address details on the mailing label of your
subscription copies. It can also be found on any renewal letters.
Have an image you feel passionate about? Get your artwork featured in these pages
Create your gallery today at
Pablo Muñoz Gómez
Pablo runs
and is a concept and character
artist based in Australia
Software ZBrush, Substance
Painter, Photoshop, Marmoset
Toolbag 3
Work in progress…
I wanted to create a real-time
3D scene that felt like an illustration.
I love macro photography and the
atmospheric beauty associated
with a shallow depth of field, so I
tried to incorporate those elements
into this fantasy scene
Pablo Muñoz Gómez, The Moss Collector, 2018
I created this for the Texturing
and Shading 2 class at Gnomon.
I wanted to challenge myself by
taking on a full character in an
environment. Since I’m a huge
admirer of weapons and
equipment from the Great War,
I designed a character from that
time. I also wanted a wide
variety of objects and materials
to texture and shade
Alec Hunstad,
German Trench Raider, 2018
Alec Hunstad
Alec is an aspiring character/
creature modeller
Software Maya, ZBrush,
Substance Painter, Marvelous
Designer, V-Ray, Substance
B2M, CrazyBump, Photoshop
Work in progress…
Stephanie Wang
This bedroom was my
midterm for my Intro to
Maya class at Gnomon. The
assignment was to re-create
a photo using the tools that
were being taught. It was my
first time modelling, lighting
and texturing a scene, so I
definitely learned a lot
Stephanie is a student at
Gnomon School of Visual
Effects who is striving to
become an environment artist
Software Maya, Photoshop
and V-Ray
Work in progress…
Stephanie Wang,
Swedish Bedroom, 2017
As a 3D generalist, I started
to explore Yeti so that I could
use it in a production for the
company I’m working for. This
image was a small personal
project to make myself more
familiar with the tool
and to be more efficient
in production
Jafar Dashtinejad,
Foxy, 2018
Jafar Dashtinejad
Jafar is a 3D generalist with
over ten years of experience on
a variety of projects from
commercials to feature films
Software Photoshop, Maya,
ZBrush, Houdini, Yeti, Nuke
Work in progress…
Midge Sinnaeve
Midge is a freelance 3D artist
and nerd in love with all things
CG. From VFX and motion
design to visualisation, he’s
happy to tackle any digital
challenge thrown his way
Software Blender
Work in progress…
I was experimenting for a client project
and the idea took a different turn. I like
seeing how far I can push complex
patterns in Blender and the shape evolved
into something resembling a digital iris,
which in turn led to the final look
Midge Sinnaeve, Eye of the Storm, 2018
In depth
It’s an autobiographical image that
shows a life-changing moment with a
rare disease I had in 2016.
I was in hospital when my
heart stopped beating.
My eyesight faded away
until passing out and the
ECG showed a flat line.
After that day, to prevent
any chance of sudden
death, I had a pacemaker implanted
Christopher Ruesing
As a photographer, Christopher
understands the use of light and
composition in imagery, which
he uses in 3D projects
Software Cinema 4D R18,
Photoshop CS6
Work in progress…
Christopher Ruesing,
Hospital, 2017
The room should be flooded by
light that is diffused by curtains. To
achieve this, I used the backlight
shader in the Luminance channel
of Cinema 4D’s Material Editor.
To merge photography with my
3D scene, I took pictures of my
arms and legs with a wide-angle
camera lens for composing in
Adobe Photoshop. These are
actually the only parts of the
image that are not rendered.
For realistic wrinkles and shapes in my blanket, I used the Soft Body Tag
in Cinema 4D and built a rough model of my legs as the collider object.
I rendered an alpha mask with an object ID channel in the Cinema 4D multipass
manager. I used another multipass layer for specular lights to highlight the material.
25 years after the release of Jurassic Park, Ian Failes looks back
on the film’s game-changing CG achievements
Grant (Sam Neill)
and Malcolm (Jeff
Goldblum) observe
the T. rex from
their Jeep
Ask any 3D or visual effects artist which
movie inspired them to get into the VFX
industry and the most common answer
is undoubtedly Jurassic Park. But the 1993 Steven
Spielberg film was surprisingly low on VFX shots
– in fact, there were just 63 – and it almost didn’t
feature CG dinosaurs at all.
The director had actually initially planned on a
combination of full-scale animatronics from
creature effects designer Stan Winston and
‘go-motion’ dinosaurs created by Phil Tippett’s
Tippett Studio. Industrial Light & Magic (ILM)
was then going to add further motion blur to
these hand-crafted, stop-motion dinosaurs and
composite them into live action plates.
That plan famously changed when ILM
produced a secret full-motion CG test that
stunned Spielberg and changed the visual effects
All images copyright © 1993 Universal Pictures
industry forever. Ultimately, Jurassic Park would
become one of the first major films to have
photoreal living, breathing CG creatures, care of
technology that ILM (overseen by visual effects
supervisor Dennis Muren) had to mostly invent
along the way. On the film’s 25th anniversary
and with the imminent release of Jurassic World:
Fallen Kingdom, 3D Artist highlights just a few of
these tech breakthroughs.
Prior to Jurassic Park, ILM had already been
revolutionising the visual effects industry for a
while. This was via several key CG moments
crafted for films such as 1989’s The Abyss and
Terminator 2: Judgment Day in 1991, after the
studio had spent years advancing the art in
practical and optical effects.
Still, in the early 1990s its computer graphics
department remained small – and somewhat
rebellious. So much so that when told they
would not be tackling digital dinosaurs for
Jurassic Park, ILM CG animator Steve ‘Spaz’
Williams surreptitiously began building and
animating one anyway on his SGI workstation
with the help of some of his colleagues.
“Mark Dippé [associate visual effects
supervisor] and I found a scan of a T. rex and I
began building the bone structure,” says
Williams. “I thought, well, we’ve done chrome.
We had done water. But we had never done
anything living and breathing. I figured if I got a
bone test up, just to prove it could be done, then
something might come of it. Then one Monday
morning I purposely had it playing on a monitor
when the producers Frank Marshall and
Kathleen Kennedy came walking into ILM. They
had no idea it was going to be on the monitor –
and it was – and then everything changed!”
The bones test animated by Williams was
exactly that – just bones. These were modelled
and animated in Alias using bicubic B-spline
patches. Later, the animation for Jurassic Park
would be done in Softimage using inverse
kinematics rather than the forward kinematics
approach used in Terminator 2.
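The distinction is worth unpacking: forward kinematics poses a chain by setting each joint angle directly, while inverse kinematics solves for the joint angles that place the end of the chain at a chosen target. As a rough, modern illustration of the concept only – this is not ILM's code, and the 2D setup is an assumption for clarity – a minimal analytic two-bone IK solve can be written like this:

```python
import math

def two_bone_ik(target_x, target_y, len1, len2):
    """Analytic two-bone inverse kinematics in 2D.

    Given a reachable target for the chain's end, return the shoulder
    and elbow angles in radians. Forward kinematics, by contrast, would
    take these angles as input and compute the end position.
    """
    dist = math.hypot(target_x, target_y)
    # Clamp the target distance to the reachable range of the chain
    dist = max(abs(len1 - len2), min(len1 + len2, dist))
    # Law of cosines gives the elbow bend needed for this reach
    cos_elbow = (dist**2 - len1**2 - len2**2) / (2 * len1 * len2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # The shoulder aims the chain at the target, corrected for the bend
    cos_inner = (dist**2 + len1**2 - len2**2) / (2 * dist * len1)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow
```

For a target at (1, 1) with two unit-length bones, the solver returns a shoulder angle of zero and a 90-degree elbow bend; feeding those angles back through forward kinematics reproduces the target, which is a handy sanity check.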
To further prove that a living, breathing
creature could inhabit the screen, another test
with a fully textured T. rex was required. This
was then carried out, featuring running
Gallimimus alongside the Tyrannosaur,
convincing the filmmakers that ILM’s CG was
the way to go to make the ‘full-motion’ dinosaurs
for the film.
The raptors terrorise
the kids in the
kitchen sequence.
Storyboards and
animatics formed
the basis of ILM’s CG
dinosaurs here
ILM visual effects supervisor
Dennis Muren (right) and fellow
artists examine a shot from
Jurassic Park
Before choosing the CG route over stop-motion,
Spielberg had already commissioned Phil
Tippett’s team to build several dinosaur puppets
and armatures as part of their planned
go-motion contributions. The director felt that
Tippett, who had managed to acquire significant
knowledge of prehistoric reptile movement and
behaviour, needed to stay on as a so-called
Members of Tippett
Studio, including Phil
Tippett (left), work
with the DID
interface engineer
Craig Hayes with
the DID
‘dinosaur supervisor’. In addition, Tippett Studio
and ILM combined on a project that would
somewhat bridge the stop-motion and CG
worlds with the creation of the Dinosaur Input
Device, or DID.
The DID was one of four physical skeleton
armatures made in the shape of dinosaurs,
covered in sensors and
linked to a computer workstation. Movement of
the armature, like that employed by stop-motion
animators on a puppet, resulted in the
corresponding movements of a CG dinosaur
model. It was designed for traditional animators,
many of whom were then unfamiliar with CG
animation software, to continue to apply their
craft on the film.
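In principle, the DID's job reduces to calibration: each sensor's raw reading must be mapped onto the angular range of the corresponding joint on the CG model. The sketch below is purely hypothetical – the joint names, sensor ranges and simple linear mapping are illustrative assumptions, not the DID's real encoding:

```python
# Hypothetical sketch of the DID idea: raw sensor readings taken from a
# physical armature's joints are calibrated into angles driving a CG rig.
def calibrate(raw, raw_min, raw_max, angle_min, angle_max):
    """Map a raw encoder value linearly onto a joint's angle range."""
    t = (raw - raw_min) / (raw_max - raw_min)
    return angle_min + t * (angle_max - angle_min)

# One stop-motion-style pose: the animator bends the physical armature
# and each sensor reading becomes a rotation on the matching CG joint.
sensor_frame = {"neck": 512, "jaw": 200, "tail_base": 800}
calibration = {  # per joint: (raw_min, raw_max, angle_min, angle_max)
    "neck": (0, 1023, -45.0, 45.0),
    "jaw": (0, 1023, 0.0, 30.0),
    "tail_base": (0, 1023, -60.0, 60.0),
}
pose = {joint: calibrate(raw, *calibration[joint])
        for joint, raw in sensor_frame.items()}
```

The appeal for stop-motion animators is that nothing in their craft changes: they still pose a physical skeleton one frame at a time, and the calibration step quietly turns each pose into keyframes on the digital creature.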
“The trick was that for every joint on that
armature – which there were for as many axes of
movement on a stop-motion armature as there
would be on a real animal or person – we had to
figure out how to actually encode that,” recalls
Craig Hayes, credited as computer interface
engineer, who was then at Tippett Studio and
helped pull together a team to build the DID. In
1997, Hayes and ILM’s Brian Knep and Tom
Williams, along with Pixar’s Rick Sayre, would
receive an Academy Award for Technical
Achievement for the DID.
ILM visual effects art director
TyRuben Ellingson created this
storyboard for the second CG
test, which featured running
Gallimimus and a T. rex
On Jurassic Park, the
dinosaurs were much
more complicated and
socking would tend to just
stretch the textures
ILM CG animator Steve ‘Spaz’ Williams, who
had helped devise the secret dinosaur test,
created this ‘chained’ T. rex sketch to establish
how the joints of the CG creature in Softimage
would operate. The drawing originated from a
scanned five-foot Stan Winston Studio model
Wireframe view of ILM’s CG T. rex
One problem that ILM quickly ran into was how
to ‘connect’ the skin of the dinosaurs to the
underlying motion of the CG skeleton. For
Terminator 2, ILM had smoothly connected two
neighbouring B-spline patches – say the upper
and lower arm of the T-1000 – to provide for a
continuing surface of the human-like chrome
figure via a bespoke process called ‘socking’. On
Jurassic Park, however, the dinosaurs were much
more complicated and socking would tend to
just stretch the textures unacceptably. That gave
birth to ILM’s ‘enveloping’ solution.
“Socking was still used to keep the patches
together,” explains Stefen Fangmeier, a computer
graphics supervisor on the film, “but then
enveloping would adjust the shape of each patch
at the joining point to distribute the deformation
over a much larger area of each patch. Next,
automated shape animation was added,
meaning that if the lower arm would rotate
relative to the upper arm at the elbow joint, a
bicep flexing shape would automatically be
generated for the upper arm.”
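A minimal sketch of the two ideas in Fangmeier's description – distributing the deformation across the joint with a smooth weight, then layering on an automatic corrective shape driven by the joint angle – might look like this in modern terms. The function, the weight falloff and the 90-degree normalisation are all illustrative assumptions, not ILM's implementation:

```python
import math

# Hypothetical sketch: blend a skin vertex between its upper- and
# lower-limb positions (the enveloping idea), then add an automatic
# corrective shape (the "bicep flex") scaled by the joint angle.
def envelope_vertex(v_upper, v_lower, weight, flex_delta, elbow_angle):
    """weight:      0..1 falloff across the joint region, so the bend is
                    spread over a large area instead of a hard crease.
    flex_delta:  per-vertex offset of the sculpted 'flexed' shape.
    elbow_angle: drives how strongly the corrective shape is applied."""
    blended = [u * (1 - weight) + l * weight for u, l in zip(v_upper, v_lower)]
    strength = elbow_angle / (math.pi / 2)  # full flex at a 90-degree bend
    return [b + d * strength for b, d in zip(blended, flex_delta)]
```

A vertex halfway across the joint (weight 0.5) with the elbow bent to 90 degrees ends up midway between its two limb positions plus the full corrective offset, which is exactly the "bicep flexing shape would automatically be generated" behaviour described above.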
“All of these sims were run outside of the
animation packages frame by frame, and the
animated position of the dinosaur would be
saved out,” adds Fangmeier. “Then, via C-shell
scripts on command lines, socking, enveloping
and shape deformation would be executed that
then resulted in a frozen posture of the creature
which could then be rendered. Two of these
frozen postures had to be generated for each
frame at neighbouring frame counts in order for
motion-blurring to be accurate.”
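The batch structure Fangmeier describes can be caricatured as a loop that freezes the creature twice per frame, once at shutter open and once at shutter close, so the renderer can blur between the pair. Everything here is an illustrative assumption – the function names, the toy one-vertex geometry and the half-frame shutter – rather than a reconstruction of ILM's C-shell pipeline:

```python
# Hypothetical sketch of the frame-by-frame bake described in the text.
def posture_at(t):
    """Stand-in for the socking/enveloping/shape pipeline: returns the
    frozen vertex positions of the creature at time t."""
    return [(t * 2.0, 0.0, 0.0)]  # toy: a single vertex moving along X

def bake_frames(frame_count, shutter=0.5):
    """Freeze two postures per frame at neighbouring frame counts, the
    pair the renderer needs to compute accurate motion blur."""
    baked = []
    for frame in range(frame_count):
        open_pose = posture_at(frame)             # shutter open
        close_pose = posture_at(frame + shutter)  # shutter close
        baked.append((open_pose, close_pose))
    return baked
```

The key point the quote makes survives even in this toy form: the animation package never sees the final deformed mesh, so motion blur has to be reconstructed from pairs of frozen postures rather than from live rig evaluation.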
Like its other software solutions, ILM also had to
invent a way to deal with dinosaur textures. At
this stage, of course, artists at ILM did not have
the benefit of a 3D texture painting program
such as MARI or Substance Painter. Very
traditional methods of painting on 2D patches
and then ‘wrapping’ those back up – hoping the
edges matched – had been relied upon up to that
point. However, computer graphics artist John
Schlag had developed a 3D texturing solution for
some CG skin requirements on Death Becomes
Her. It was called Viewpaint.
“The idea of Viewpaint was that it would let
you spin an object around in three dimensions
and paint on it,” says Schlag. “It was like you
were painting on a sheet of glass over a 3D
render of your object, and then it slurps the paint
off the glass and into the textures that make up
the object. But it does that in the texture space
of that object. You don’t have to worry about
unfolding things by hand. And then you’re
guaranteed to get the seams right.”
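Conceptually, Viewpaint's trick is a screen-to-texture projection: each painted screen pixel is looked up against the surface point visible beneath it, and the colour is written at that point's UV coordinate, so seams never have to be matched by hand. The sketch below uses plain dictionaries to stand in for the screen buffer and the texture map; it illustrates the idea only and is in no way Viewpaint itself:

```python
# Hypothetical sketch of projection painting: strokes made "on a sheet
# of glass" over the 3D view are slurped into the object's texture space.
def project_strokes(strokes, screen_to_uv, texture):
    """strokes:      {(sx, sy): colour} painted over the 3D viewport.
    screen_to_uv: maps a screen pixel to the (u, v) of the surface
                  point rendered there (None over empty background).
    texture:      dict acting as the object's texture map, keyed by (u, v)."""
    for pixel, colour in strokes.items():
        uv = screen_to_uv.get(pixel)
        if uv is not None:        # ignore strokes that miss the object
            texture[uv] = colour  # the paint lands in texture space
    return texture
```

Because the write always happens at the surface point's own UV, adjacent patches painted in one stroke receive matching texels along their shared edge, which is the seam guarantee Schlag describes.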
Schlag adapted current pieces of software to
implement the then novel approach to 3D
texture painting, with Viewpaint continuing to be
developed by several subsequent ILM
employees and becoming a mainstay of the
studio’s pipeline for many years (artists using
the tool even became known as ‘Viewpainters’).
Schlag, Zoran Kacic-Alesic, Brian Knep and Tom
Williams received an Academy Scientific and
Engineering Award for Viewpaint in 1997.
ILM’s CG T. rex. The studio’s enveloping technique
and Viewpaint 3D texturing software were just
some of the tools that helped make the final
dinosaurs look photoreal, and match Stan
Winston’s full-scale animatronic versions
Storyboards, animatics, the Dinosaur Input Device and final animation helped make the iconic paddock
attack scene in Jurassic Park possible
In the scene, the T. rex is able to break through
the non-electrified fence where Jeeps carrying
the characters have stalled. This storyboard
was one of many created to flesh out the
paddock attack sequence and work out which
shots could be achieved with animatronic
dinosaurs or CG imagery.
Having already invested significant research
into dinosaur behaviour, Tippett Studio
produced stop-motion animatics for key
scenes, including the paddock attack. Here,
senior animator Randal M Dutra animates a
Jeep and T. rex puppet. Additional 3D
animatics were also carried out.
Dutra operates the DID for the Jeep attack
shots. Sensors attached to the DID armature
fed back to a computer model of the T. rex and
enabled stop-motion animators, who at that
point had not had significant training in
computer animation, to contribute their
expertise to the film.
For final shots in the paddock sequence, ILM
relied on data from the DID and a wealth of
keyframe animated performance. In addition,
the studio had to incorporate its CG T. rex into
a rainy environment, necessitating the writing
of specialised RenderMan shaders and digitally
compositing in extra marks and splashes.
ILM’s innovations on Jurassic Park extended to
many other areas, including CG stunt doubles,
shader writing for RenderMan, digital
compositing, and plenty of invisible effects such
as face replacement and wire and rig removals.
Perhaps one of the studio’s biggest general
achievements was simply being able to create
CG dinosaurs that could intercut side-by-side
with practical ones.
The film ultimately won the Best Visual
Effects Oscar for this combination of practical
and digital visual effects, awarded to Dennis
Muren, Stan Winston, Phil Tippett and Michael
Lantieri. And that pioneering work continued
through subsequent Jurassic films and other ILM
projects, helping to shape the visual effects
industry we know today.
Subscription offer
Subscribe today and receive
this inspirational book, for free!
Never miss an issue
Delivered to your home
Get the biggest savings
direct to your doorstep
“@3DArtist We love using #blender &
#cycles in our studio! Looking forward
to the new #3DArtistMagazine.”
@Vibe_Studio via Twitter
“Just arrived! The print version is so
get home to give it a good read.”
@pablander via Twitter
“Really having a good time with
issue 90 of @3DArtist magazine.”
@cochesaurus via Twitter
Subscription offer
Subscribe and save 20%
One year subscription
direct to your doorstep
3D Artist Subscriptions, Future Publishing Ltd
Call 0344 848 2852
10 June 2018
We’re [at] almost
200 people over all
the different offices
Florian Gellinger,
cofounder at Rise FX
It was in these three
small glass cubes where
Rise FX first opened its
doors – now it dominates
this entire building
Berlin is one of Europe’s most culturally diverse and artistically
rich capitals, but when it comes to 3D design there are few
companies as significant as Rise FX, as Adam Barnes discovers
“We’re at almost 200 people over all
the different offices,” says Florian
Gellinger, one of the four founders of
Rise FX, based in Berlin, Germany. “Berlin alone
has roughly 1,400 square metres now, so almost
five times what we started with.” The Berlin
studio is a huge multilevel environment with
exposed cream bricks and black painted walls
that contrast one another.
We’re guided through the ground floor,
weaving through countless desks of busy artists,
shown the on-site screening room and told of
how Marvel’s involvement had meant this once
open space had to be retrofitted to allow for
more privacy, more secrecy. Gellinger leads us
upstairs, through a much more open hall and
into the largest meeting room of the building,
where we’re greeted by his office dog and a
glorious sight of the river Spree.
Yet as significant a studio as Rise FX is, it
doesn’t feel like it – there’s a sense that the
company still retains the very same essence that
it was born with, of little more than just
friends making art together. Only moments
earlier, Gellinger had been inspecting a delivery
for the company, laughing with colleagues who
didn’t seem to recognise him as a boss.
“We started actually on the same floor that
we’re at right now,” he explains, “which was
around 300 square metres. We had everything
together – we had a little screen room, it had
a server room in there and the ACs on the roof.
And so all in all there was probably, if you
subtract the conference room from the management
and producing floor, we probably had space for
maybe 20 artists.”
It’s hard to get a sense of the stature that
ought to come with a place like Rise FX, as
though staying in the same place for a decade
has allowed it to hold onto its origin like a
weathered oak tree’s roots that continue to
reach further into the ground but never really
change within its own space.
“We’ve been incredibly fortunate the way that
we just bumped into the right people at the right
time,” says Gellinger of Rise FX’s growth over
the years. “We were planning to do only smaller
German theatrical releases, arthouse films or
maybe some television work with the people
that we had,” he recalls of the early days of the
studio. And that’s how it began, though the
company “always wanted to focus on visual
effects”. The four founders – Sven Pannicke,
Robert Pinnow, Markus Degen and, of course,
Gellinger – each shared a passion for film and
wanted to work within the industry to help
directors realise their visions.
“We didn’t want to
do advertising,” adds
Gellinger, “we didn’t
want to do industry
promotional films. We
always knew that we
wanted to do
storytelling through
images and then help
directors and their
camera crew to create
something that they just can’t shoot for real.”
As for the artistry of it, Gellinger explains that
it’s “way more rewarding” than having to just
create a product solely to keep the lights on.
“After five years [of such work] you can’t even
remember what you worked on,” he adds,
pointing out that the studio had no interest in
beauty work for slow motion hair in a shampoo
commercial or making the best of a laundry
detergent packshot. “We always wanted
something where you put your heart and soul
into something over weeks and weeks.”
It would be a strange turn of events that
would lead to the studio making its mark,
however. Beginning in 2007 and working on
primarily German-language film and television
releases, it was Apple that would inadvertently
lead to the company’s recognition.
“Apple had just discontinued Shake as a
compositing tool and everybody was kind of
looking for the next thing to work with for digital
image manipulation, and there were only really
two options left. One was Nuke and the other
one was Digital Fusion. We decided to go with
Nuke and we actually had a lot of support from
the software developers in London at the time.
They helped us with self-promoting us through
their channels.”
This led to a meeting with a German arthouse
cinema director who was a fan of visual effects
in films but understood that he couldn’t really
afford such things on his limited budget. All the
same, Rise FX worked with him, enjoying the
difficult challenge of having to create a
realistic-looking 16-second car crash that
involved the main actor with only extremely
limited resources. “So we needed to use the
actor and we needed a real crash, but how do
we combine the two?”
The result was a string of inventive physical
shots from various angles as well as stitching in
background plates and moving panoramas in
addition to digitally adding elements to a
full-throttle, real-life car crash. That in itself was
novel, but using Nuke to composite it all together
creatively was very new at the time.
“So when the software manufacturer showed
that at IBC in Amsterdam we got a lot of
attention, way more than you would have
expected at the time for a company of our size.
We were pretty much just doing stuff with the
new compositing tool that nobody was
accustomed to in a different way than everyone
else was and it kind of
got us attention.”
This attention
ultimately led to work
coming in from
directors outside of
Germany, beginning
with the Wachowski
siblings on Ninja
Assassin and growing
internationally from
there. It’s almost emblematic of the studio as a
whole that such a seemingly small part of the
industry – a 16-second car crash for an arthouse
movie – would go on to have such an impact, a
heritage that has stayed with them. “Even last
year,” says Gellinger, “when we were working on
Fate of the Furious for Universal and I was seeing
the supervisors in Los Angeles, both supervisors
told me that they had seen this car crash years
ago. They had even stolen a couple of our ideas
for the Fast and the Furious.”
It was the team’s work on Harry Potter and the
Deathly Hallows in 2010 that shot them into the
limelight, however, and Rise FX soon moved on
to X-Men: First Class and Captain America, The
Man from U.N.C.L.E. and The Dark Tower. Now
much of Rise FX’s work comes from Marvel, but
if there’s filming in Berlin then the odds are high
that Rise has worked with them, including recent
Netflix hits like Sense8.
There are drawbacks to the size of such a
studio, though, even if those negatives don’t
quite outweigh the positives. “We’re just not
flexible anymore,” laments Gellinger. “We’ve
sacrificed a lot of it through our size.” Since his
role now is mostly in finding the work and
ensuring the right work is allotted to the right
people at the right time, it’s a facet that he
seems to regret losing.
“But of course it also gives us different
abilities now – like doing character work or huge
destruction scenes or effects design. It’s
Rise FX’s credits range from Marvel’s
superhero movies to Gore Verbinski’s 2016
sci-fi horror A Cure For Wellness
The company has grown to
employ 200 people across its
various studios, with its Berlin
office being the biggest of them all
The digi-double work
from Rise is likely to be a
mainstay for the company
for the foreseeable future
Marvel has worked with
Rise FX ever since X-Men:
First Class, and on pretty
much every major Marvel
cinematic release since
Rise FX has witnessed a decade of change
in what is being deemed Berlin’s Soho
Rise FX has been located in the same
complex for the entirety of its decade-long
life, growing and expanding but never
moving. “We’ve been lucky with our
landlord,” says Gellinger, “I think he’s
interested in seeing what we produce.” But
even the building itself has had to change
with the company, since Marvel’s regular
involvement with the studio facilitated
necessary alterations for greater privacy.
“We actually had some Marvel [Studios]
members work with us here in the studio,
and that required us to change the open
space, to build extra rooms.”
But it isn’t only the building itself that has
grown with Rise. As anyone who has ever
lived in one of the large capitals of the world
will attest, ten years can change a lot about
an area. Rise’s office complex is a little
sprawling and it’s tricky to find the
entrance. When we visited, there was even
a large-scale construction project next
door; it’d be easy for anyone to spot that
this is an area on the up. “Since we’ve been
here, the area has kind of grown and grown.
When we first moved here there wasn’t
much here, but over the years we’ve seen
more and more studios appearing,” says
Gellinger. “It’s like Berlin’s Soho.”
The walk from the metro is an eclectic
mix of bars and a plethora of cuisines from
around the world. The famous Lido bar has
been bringing people to the area for years
but now almost by accident (or perhaps
because of Rise FX’s popularity), it has
become a haven for 3D and VFX studios.
“We know everyone from other studios
– we often meet them in bars or have a
drink with them,” says Gellinger. “We have
even begun arranging meet-ups,” he adds,
creating a sense of local camaraderie
between the various studios and artists.
Rise FX’s international success is unlikely to
be the sole reason for Kreuzberg’s current
gentrification, but with so much creativity
being injected into the area from a number
of sources, it could well become one of the
3D design centres of Europe.
One recent example of the
team’s ability to create
believable environments is in
The Dark Tower
With such an emphasis on
VFX in Black Panther – and
its resulting praise and
popularity – Rise FX’s
recognition has only
continued to grow
When done well VFX is invisible, but
that’s not always a good thing…
something that you just can’t do if you’re
trying to keep it simple and trying to solve
everything with a brilliant idea; you can’t get a
chunk of 200, 300 shots done.”
With that said, it’s clear – through speaking
with Gellinger – that Rise FX is a studio that’s all
about innovating, about being creative with its
work and always looking for opportunities to
challenge these ideals. He remains cautious as
he speaks about the current project that the
team is working on, unable to say too much, but
he does explain that there are always things for
the studio to consider as technologies change
and improve that are, in turn, guiding the work
that it undertakes.
“What we’ve been doing over the last couple
of years, for over five years now actually, is that
we’ve been rendering all of our images using
Houdini,” explains Gellinger, adding that it’s had
beneficial effects on the company’s workflow,
which has led to Rise being able to branch out
into further fields.
“Now we’re starting to work on animated
feature films,” he discloses. “We delivered 300
shots for this show that was split between
Norway, Belgium and so on last year and building
on that experience, we’re now doing a little more
than half of Dragon Rider, I think, that we’re
co-producing with Constantin Cinema.”
The benefit of this is that as Rise FX develops
its animation tools within its visual effects
pipeline, the various teams can “take advantage
of the animation tools that we’re using for
animated features within our pipeline”. The two
disciplines may be completely different from
one another, but for Rise FX they essentially
complement and empower one another. And in a
way it regains some of the flexibility that
Gellinger misses, allowing the studio to pick
from a larger variety of projects.
“I like working on some historic stuff from time
to time because it’s a good change to just be
involved in all these different styles and creative
briefs. It’s like, every couple of months you go to
work and you work on something that’s kind of
refreshing because it’s not the same all over
again. That’s kind of what I enjoy, just getting
more variety in work and always thinking of ways
to solve it with what we have and where we
might improve.”
Visual effects are in a strange position. They
are valued for their ability to enhance a
film’s vision, yet criticised whenever their use
is noticed. Despite the talent of the studios
working hard to implement something that
is basically undetectable, there’s still an
underlying stigma.
“It’s really a problem with everything that
we’re fighting for,” says Gellinger.
“Something like two-thirds of all the artists
involved in a film aren’t in the credits. I
mean, film directors are really promoting
something now when they’ve built it for real
instead of doing it in visual effects because it
became something special.”
The problem lies as much with the directors
and producers, who constantly disparage
the use of VFX despite relying on it
regularly. “Christopher Nolan, for
example, claims that there were [very little
VFX] in Dunkirk,” says Gellinger. “There
might not have been a single green screen
but it just means that people have been
rotoscoping actors instead of pulling the
chroma key.” On the other hand, Gellinger
doesn’t think that we should be overusing
visual effects, “What I totally understand,
and am also a big proponent of, is: don’t use
visual effects [when] you could have gotten
it in camera. I totally understand that
concept, as that would be a misuse of
computer graphics.”
Gellinger’s philosophy is to use VFX only
where it matters, and perhaps that is where
this stigma has come from. “It’s a problem
that you can only solve through good
publicity,” says Gellinger, “and if you’re not
talking to anyone and you’re keeping it all to
yourself and you’re not doing great making-of
or before-and-after breakdown videos that
are promoted through your social media or
other internet channels, then it’s going to be
hard to make people understand what we’re
actually doing.”
Learn how to take your game art skills to the next
level with the insight of triple-A industry artists
Game art is an attractive prospect for
many 3D artists as it provides them
with the opportunity to help realise the
visual elements that can make a video game
iconic. It’s an endlessly diverse field,
encompassing absolutely everything from
vehicles to vegetation. However, one thing that
remains consistent among them all is that
every individual asset is the work of dedicated
artists, constantly improving their craft.
Here, five such artists shine a light on their
contributions to the art of video games, from
remastering beloved classics to creating
memorable boss characters. They explain their
go-to software, break down their workflows
and tell us how they went from playing video
games to making them. From hard surface
modelling to texture baking, environments and
weaponry, this team of industry professionals
has got you covered.
---------------------3D artist at
Ubisoft Massive
Five top tips for honing your hard surface modelling
Once you’ve spent time learning to
model and render, it’s useful to gain
an understanding of real-life
mechanics. This helps to keep your
own designs grounded in reality. To
do this, I find myself looking at all
kinds of references – for example,
commercial vehicle designs or mecha
toys. It’s important to not limit
yourself to your own field.
I feel that I do my best work under a
deadline and with restrictions. Art
competitions are a great way of
pushing myself to work quickly while
creating a design that fits someone
else’s brief. At the moment, the
ArtStation challenges are a great
place for this.
With mechanical design, it’s
important to follow a 70/30 rule with
any small detail work. I find that
increasing the amount of smaller
detail work is a great way of drawing
the eye to areas of interest and
creating an overall composition.
Modelling is an inherently slow way
to design versus drawing or concept
art. It’s important to take time and
research the techniques being used
by other artists in order to make sure
you’re not slowing yourself down any
further than necessary. For me, the
biggest boost to my productivity has
been moving from Maya to Modo
and including a round edge shader in
my renders to avoid a large amount
of high-poly modelling.
I’ve been creating my best work ever
since I started to figure out my own
style. I used to worry a lot about what
others might want to see from me
until an experienced artist told me to
just focus on what I enjoy. Since
making that shift, it’s been far easier
to motivate myself and to create
better work.
----------------------nce 3D
character artist
How to keep your texture baking
and 3D skill set up to date
Claudiu Tanasie has been in love with drawing
for as long as he can remember but it was only
when he got his hands on a copy of 3ds Max in
college that his passion for 3D art was ignited.
It was pure chance that landed him his first
freelance role for a studio in Hungary. “It was
the first time I thought I could do this for a
living without starving,” he explains.
Since then, Tanasie has worked on the likes
of Call of Duty and The Witcher for AMC Pixel
Factory before forging a successful freelance
career that has seen him contribute to
Dishonored 2, Doom and Lawbreakers.
Knowing which software to employ for
specific challenges is crucial to Tanasie’s work
– “for example, Marvelous Designer for an
asset that has a lot of clothes and needs to look
hyperrealistic, or Fusion 360 for a robot that
would take forever to model with traditional
polygon modelling.”
Tanasie attributes his success to a ceaseless
hunger for knowledge that keeps him up to
date in all the major software and involves
watching hours of tutorials. He adds, “The
problem is not a lack of information, it’s about
how much time you are willing to invest.”
Learn how to produce quality bakes that will save you from a texturing nightmare
Decimate the high-definition
model Depending on the hardware
specifications of your workstation and the
complexity of your model, this step can be
optional or mandatory. Personally, I always
decimate my high-poly geometry. This is time
well spent, especially when you have to bake an
ambient occlusion or thickness map. I also name
my SubTools if I haven’t done so already.
Assign polypaint to the high-poly
model The polypaint assigned in
ZBrush will be used to generate an ID map. To
speed up the process, I activate Draw Polyframe,
select the topmost SubTool and press Cmd/
Ctrl+W a few times until I get a colour different
enough from the other SubTools. Then I press
Tool>Polypaint>Polypaint from Polygroups,
switch to the next SubTool and repeat. Once I’m
finished with all the SubTools, I go to the Zplugin
menu and choose Export from the SubTool
Master subpalette, to batch export them.
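The point of the polypaint pass is simply to give every SubTool a flat colour that is easy to tell apart in the final ID map. As an aside, evenly spaced hues do this well; the small Python sketch below (purely illustrative, not part of the ZBrush workflow itself) picks well-separated flat colours for a given number of SubTools:

```python
import colorsys

def id_colors(n):
    """Return n well-separated flat RGB colours (0-255 per channel)
    by sampling evenly spaced hues at full saturation and value."""
    colors = []
    for i in range(n):
        hue = i / n  # evenly spaced around the hue wheel
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
        colors.append((round(r * 255), round(g * 255), round(b * 255)))
    return colors

# e.g. four SubTools get four clearly distinguishable hues
print(id_colors(4))
```

The same idea is what pressing Cmd/Ctrl+W repeatedly is approximating by hand: maximising the colour distance between neighbouring SubTools so the masks stay clean.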
Offset the geometry I import both the low-poly and high-poly
geometry into a new Maya scene. Select Move Options, then set
Step Snap to absolute and the value to 100. This ensures that you
will only move objects in multiples of 100 units. If you lose some of
the files, this will make it easier to re-create them later on. Go
through all the pieces and whenever two of them are close to each
other, offset one so that there’s enough space between them. Make
sure you offset the low-poly and corresponding high-poly geometry
by the same amount, then export the offset high-poly.
Create a helper low-poly mesh In order to get better ray
casting, we will add some subdivisions to the low-poly. This
geometry will only be used in the baking process, so make sure you
save a copy of your low-poly mesh beforehand. Select all the
low-poly objects, then choose Add Divisions in the Edit Mesh menu.
Most of the time one subdivision level is enough, but if you find
you’re still not getting good results, you can add another.
Alternatively, you can manually add some cuts in the problematic
areas, but this is a time-consuming option. Export this geometry as
an FBX file and don’t forget to give it a clear name so that you won’t
mistake it for the low-poly that will end up in the game.
Load all the geometry into xNormal Start xNormal, select
the High Definition Meshes tab, then drag and drop all the
high-poly geometry. Select the Low Definition Meshes tab, and drag
and drop the helper low-poly mesh.
Bake a test normal map In the Baking Options tab, choose
the path and size of the resulting texture bakes. For now, only tick
the normal map. This map is the fastest to bake and is best suited to
finding the optimal ray cast distance. Then click Generate Maps.
Check for errors Use your viewer to check for baking errors.
Take your time and inspect the entire model – this is the best time
to adjust the baking settings. If some details of the high-definition
model get cut out, go to the Low Definition Meshes tab in xNormal
and increase the Maximum Frontal Ray Distance and Maximum
Rear Ray Distance. If some details get projected in the wrong areas
– one finger being projected on a neighbouring finger, for example –
try decreasing the ray distances. In some complex scenarios, you
may not be able to find a perfect value for the ray cast distance.
When this occurs, you can bake multiple versions of the maps and
combine them in Photoshop.
Bake an object space normal map and ambient occlusion
map Once you’re happy with the test normal map, it’s time to bake
some of the final maps. In the Baking Options tab, click the green
square next to the normal map to open its baking options. Uncheck
Tangent Space. This is the biggest difference from the standard
workflow: we will generate an object space normal map that we will
later convert into a tangent space normal map. The reason this
technique works is that an object space normal map is somewhat
independent of the low-poly mesh used. Tick the Ambient Occlusion
map too, press Generate Maps and take a break while xNormal
does its thing.
Bake the ID map Next is the ID map, which we will use to easily
mask different parts of the texture. For xNormal to be able to bake
the vertex colour we assigned in ZBrush, we need to uncheck Ignore
Per-Vertex-Color for each high-poly mesh in the High Definition
Meshes tab. Make sure to uncheck Normal Map and Ambient
Occlusion, and then press the Generate Maps button. The ambient
occlusion map results depend on the Ignore Per-Vertex-Color
setting, so if you need to go back and generate another one, you will
have to reverse that setting, too.
Convert the object space normal map to tangent space
The last step is to generate a tangent space normal map from the
object space normal map and the low-poly mesh that we will use
in-game. Switch to the Tools tab and choose the Object/Tangent
Space Converter. Set the path for the low-poly mesh – the one used
in the game – the path for the object space normal map that we
generated earlier, and the path where you want the resulting
tangent space normal map to be generated. Make sure Convert
Object Space to Tangent Space is checked, then press Generate.
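Under the hood, an object-to-tangent-space conversion is a change of basis: per pixel, the object space normal is re-expressed in the tangent, bitangent and normal (TBN) frame of the in-game mesh. A minimal Python sketch of that projection follows; the TBN vectors here are hypothetical stand-ins for values the converter derives from the low-poly mesh and its UVs:

```python
def dot(a, b):
    # Plain dot product of two 3D vectors.
    return sum(x * y for x, y in zip(a, b))

def object_to_tangent(n_obj, tangent, bitangent, normal):
    """Re-express an object space normal in a TBN basis.
    Assumes the TBN vectors are unit length and orthogonal."""
    return (dot(n_obj, tangent), dot(n_obj, bitangent), dot(n_obj, normal))

# A surface whose object space normal already matches the vertex
# normal comes out as the 'flat' tangent space normal (0, 0, 1),
# which encodes to the familiar lilac RGB (128, 128, 255).
flat = object_to_tangent((0.0, 0.0, 1.0),
                         (1.0, 0.0, 0.0),   # tangent (assumed)
                         (0.0, 1.0, 0.0),   # bitangent (assumed)
                         (0.0, 0.0, 1.0))   # vertex normal (assumed)
print(flat)  # (0.0, 0.0, 1.0)
```

This is also why the object space bake is "somewhat independent" of the low-poly mesh: the mesh-specific part of the information only enters at this final change-of-basis step.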
on artist
at Treyarch
Ethan Hiley talks crafting weapons for the Call of Duty franchise
“Before getting into the industry, I used to spend a lot of time
experimenting with 3D modelling in a variety of mod tools for
different games. From there, it just seemed natural to progress
towards developing games,” says Ethan Hiley of his path into the
games industry. He began his career as an environment artist but a
fascination with hard-surface modelling and building digital weapon
models in his spare time eventually led to a change in direction.
As Hiley shifted his focus towards props, vehicles and then
weapons, he found himself overseeing the weapon art for Call of
Duty: Modern Warfare Remastered. He continues, “Subsequently
the opportunity presented itself to join the awesome team at
Treyarch, and it was an opportunity I couldn’t pass up.”
Hiley describes working on the remaster of one of his favourite
games, Modern Warfare, as an experience that was equal parts
amazing and stressful. He reflects, “We pored over every
meticulous detail, ensuring that when someone picked up any of
the weapons, they immediately looked and felt just like they did
back in 2007, but with the level of fidelity capable on current
consoles.”
One of his recent projects, Call of Duty: Black Ops 3, features a
variety of near-future weaponry that required him to collaborate
closely with the concept artists that dream them up. “It’s our job to
realise the weapon in a 3D space and make sure it looks right in
first-person. The key is to maintain an intriguing and iconic
first-person silhouette,” he explains.
The process of achieving this begins with drawing up a design
profile for each weapon. “This dictates how design expects the
weapon to behave in-game and what kind of players they expect to
use it,” adds Hiley. From this point, Hiley and his team look towards
real-world weaponry to help determine aesthetic and silhouette.
When it comes to his software of choice, Hiley favours Maya for
the majority of his modelling, ZBrush for high-poly modelling,
Marmoset for baking and Substance Painter for texturing. Much of
his weaponry consists of a variety of materials and finishes, each
crafted with the same attention to detail.
“When we texture our weapons, we treat each part of it as if it
were its own asset, from the barrel all the way down to individual
bolts. This approach helps to define the individual pieces and make
the weapon feel like it was assembled from a variety of parts rather
than one uniform material across the whole thing,” explains Hiley.
“Often, if a weapon is textured with everything receiving a similar
material treatment, all the details tend to blend together and get
lost. You lose all that construction detail you built in and the
materials don’t help to give the weapon its character. By treating
each part as a unique surface, it helps to highlight the variety of
construction detail and bring a bit more personality to the
materials.”
© Activision
Mak Malovic, lead environment artist at Bluepoint Games,
discusses remaking a classic
“One of the challenges is bringing them up to current-day
standards but keeping the original mood and feeling,” Mak Malovic
says about remastering beloved videogames. Malovic has been in
the industry for seven years, the last four of which he’s spent as an
environment artist at independent videogame developer Bluepoint
Games in Austin, Texas. The studio has now garnered a reputation
for delivering high-quality updates of much-loved games, even
being dubbed ‘masters of the remaster’.
During this time, Malovic has worked on upgrades of Gravity
Rush and the Uncharted series but he admits that remaking the
much-loved Shadow of the Colossus was the biggest challenge yet.
“Since Shadow has such a large and dedicated fanbase, we knew
that any changes we made would be highly scrutinised,” adds
Malovic. “We constantly had to ask ourselves ‘Are we going too far?
Does it still feel like the original game?’”
The added pressure wasn’t helped by the fact that this was the
studio’s first full-blown remake, which involved re-creating all
assets from the ground up. Malovic was responsible for creating the
incredible architecture seen throughout the game. The process
would usually begin with him looking for inspiration in real-world
sources and a series’ newer titles, but Shadow of the Colossus is a
standalone game so the latter wasn’t applicable.
He goes on to explain these early stages further. “We create a
beauty corner representing the ‘new’ look and quality we are
striving for.” This starting point is also helpful when it comes to
figuring out an asset budget for the team’s performance goals. “We
also try to understand how much more detail we can introduce
while maintaining the original mood and feeling and without
affecting how players remember the game. With Shadow of the
Colossus, we fortunately had amazing concept art which helped
a lot.”
The creation then begins in Maya. “Most of the time with
remasters and remakes, we start with the original source art. We
treat it like a highly detailed white box. Then we clean up the
geometry, if needed, so we have an easier time adding more detail
and definition.”
“In the case of Shadow of the Colossus, because we knew we
wanted to go further, we had to rebuild a lot of the geometry, using
the source art as guidelines. Once we had the general blockout
done, we would start to add additional architectural elements on
top of that in Maya. The challenge here was trying to stick to the
collision bounds of the original game, which is why we assembled
the levels mostly in Maya and not in our game engine. Having a
constantly growing Maya architectural kit library helped a lot when
we were getting started with new assets.”
After this step, Malovic would utilise ZBrush to give standout
pieces of architecture the character they have inside the finished
game. “I always start with a high-poly blockout done in Maya and
build off that. I would usually pick a few things in any given level I
thought were ‘hero pieces’ and sculpt those.” Malovic sped up the
sculpting process by employing a number of custom brushes and
alphas. Like all artists, he has his preferences: “My favourite brush
is still Mallet Fast 2.”
Finally, Malovic has some tips to share with those looking to
achieve his level of texturing detail. “Focus on the big reads first;
once you nail those, then go down to the medium, small and finally
micro details. Additionally, keep in mind the rule of thirds, and try
and think about rhythm, not repetition.”
© Sony
I used a tool called World
Machine to create the
mountains in the background.
It generates a height map,
which I use to make
landscapes inside of Unreal.
I use colour masks a lot in
my work. They help to make
different variations on
plaster or other materials
very easily and directly in
the editor.
I’ve used material blending a lot.
It’s a really cool feature of Unreal
Engine that helps to avoid tiling
and bring more variety to
surfaces. It works especially well
on large areas like this level.
----------------------Co-founder and
art director at
Artcore Studios
A guide to starting your own outsourcing studio and building
atmospheric environments with Denis Rutkovsky
Can you tell us a little about how and why you
started Artcore Studios, your game art
outsourcing studio?
I spent many years working on big projects
with various studios and for a large chunk of
this time I was the lead artist of the Insource
team at Crytek in Kiev, Ukraine. After that I
moved to the UK, where I had the pleasure of
working on Batman: Arkham Knight with
Rocksteady Studios. It was towards the end of
this production that I came up with the idea of
starting my own studio.
Artcore Studios began as a small team of
two artists, managing to get clients and build a
base without any external investments. Now
we have a cozy office in the city centre of Kiev,
with a team of ten talented artists. The best
part is that we grow and gain experience every
single day.
For this level I have built an
asset library of around 130
pieces including static
meshes, decals and organic
elements such as rocks.
What are some of the advantages and
disadvantages of being a freelance artist?
Being a freelancer might not be as financially
stable as working for a studio but you do have
the advantage of being able to manage your
own time. If you do it right, you can make more
money than a studio would offer you. You’ll
also develop a range of skills separate from the
art like managing your finances.
A big downside is that you won’t be working
with so many like-minded people. This is what I
miss the most about working for bigger studios
– you can’t put a price on inspiration from your
colleagues and friends. Working freelance is
great but it is always better to make a start with
studios as it’s the only way to develop your skill
set properly. There are so many fields linked to
game art that would be impossible to learn
about on your own.
How do you go about creating the mood of
your work?
A good starting point for the mood of an
environment is deciding which kind of lighting
to use – sunlight or artificial sources. I don’t use
many effects in post-production as it affects
the mood too much. This should be created
with the lighting setup.
Can we get some top tips for lighting?
Even after making so many environments with
lighting set-ups, I still consider myself
something of a beginner in this area. Having a
good contrast between light and dark areas is
something I have found helpful, as well as
trying to avoid having any complete darkness in
an environment. I would also suggest playing
around with the fog in Unreal Engine as it has
huge and often positive effects on lighting.
Mother by Natalia
P Gutiérrez
There are many advantages to rendering in
real-time and with recent advances in both
software and computing power, it has
never been more accessible
Real-time rendering technology has
improved by leaps and bounds in
recent years, and incredibly
powerful tools are now at our fingertips. We can
achieve remarkable results using software
that’s available for free – neither Unreal
Engine 4 nor CryEngine charges you to use
it; you only pay five per cent of your
revenue when you publish. Unity is also free
for personal use. If you haven’t done much
real-time work yet, now may be the perfect
moment to start!
Over the following pages, we have tips and
tricks from some of the top artists working
today that will help you create incredible
characters and environments in real-time.
Our writers have discovered these shortcuts
from many hours of trial and error, and now
you can reap the rewards.
For example, you’ll learn how to
imperceptibly re-use the same assets in your
work, making your scene faster and lighter
and saving you time. You’ll also receive
forewarning about the kinds of traps you can
fall into by not setting your scene up properly,
and advice on how to get things right the first
time so you avoid having to make a lot of
tweaks further down the line.
These are all things that other 3D artists
have learned the hard way, so you don’t have
to. You’ll also learn about our artists’
favourite nodes and shortcuts that, when
used correctly, can really add an extra special
level to your work.
Environment artist
Senior cinematic artist, Crytek
Freelance character artist
CEO & interaction designer
Digital sculptor/character artist
Freelance senior environment artist
Lead artist, elite3d
Environment artist, Ubisoft Massive
Character artist
The Unreal Engine 4 Cine Camera
Actor is great for re-creating the cameras you
see in films! You can select a preset that mimics
cameras like an IMAX 70mm, or set up your
own. I usually aim for a 35-55mm lens for an
environment scene. Karen Stanley
Jody Sargent teaches us how to create natural-looking grass
The directional light intensity is set to
10 by default (in version 4.18 and below) when
you drag it in – but that can be too strong
depending on the kind of scene you are
creating. 3.14 is a good value for a grey overcast
day and up to 10 is great for a hot, hot desert. In
Unreal Engine 4.19, this has changed to using
Physical light units, now making this even easier
to set up! Karen Stanley
Try to choose a suitable cube map that
fits your scene as best you can. There is no
point using a night-time HDRI if you are trying
to create a realistic orange desert scene. You
can also adjust the number of bounces that the
skylight supports during a lightbake in World
Settings>Lightmass. Karen Stanley
As Unreal Engine 4 uses a PBR
system, its lighting relies heavily on materials
being correct. Try to make sure your albedo
values are right for the material that you are
representing. If it's too dark, you may find your
light bounces during the bake do not have much
effect, and your roughness may not react to
light correctly either. Karen Stanley
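A common rule of thumb is to keep non-metal albedo values away from the extremes so the lighting has something to work with. The exact thresholds vary by studio; the 30-240 sRGB range used in this Python sketch is an assumed guideline, not an Unreal-enforced limit:

```python
def albedo_out_of_range(pixels, lo=30, hi=240):
    """Return the fraction of sRGB texels falling outside a chosen
    'plausible albedo' range. lo and hi are rule-of-thumb values,
    not engine-enforced limits."""
    bad = sum(1 for r, g, b in pixels
              if min(r, g, b) < lo or max(r, g, b) > hi)
    return bad / len(pixels)

# Charcoal-dark texels like (5, 5, 5) absorb nearly all light and give
# the light bake little to bounce, while near-white texels blow out.
sample = [(5, 5, 5), (128, 100, 90), (250, 250, 250), (60, 70, 80)]
print(albedo_out_of_range(sample))  # 0.5
```

Running a quick check like this over a texture's pixel data can catch albedo problems before they show up as odd lighting in the bake.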
Post-processing tools like Colour
Lookup Tables are powerful for adding that final
polish to your scene. But make sure not to jump
into this before you have your materials and
lighting all set up correctly as this can skew your
results and you might end up with something
rather funky. Karen Stanley
Create the grass Let’s start off by
figuring out which pieces of foliage you
will be needing for your biome. Once this has
been decided upon, you can now create the
different bits of foliage that you will need and
edit all of the vertex normals to be pointing
straight up so that there will be no harsh
intersections with the ground inside Unreal
Engine 4. Now let’s go ahead and assign vertex
colours to the model so that the base is black
and the tips are white for use when setting up
the grass movement.
Devise the grass types Create a Landscape Grass Type by right-clicking in the Content Browser and finding it under the Miscellaneous tab. Click on the +
symbol to add a new element. In the Grass
Mesh slot, add the grass mesh that you have
created. You can add multiple meshes to this or
even make new grass type actors to control
their parameters and where they spawn on the
terrain separately.
The material Make a masked
material with the two-sided foliage
shading model and plug in basic diffuse, alpha
and normal textures. Plug the diffuse into the
subsurface colour with an optional multiplier.
Create a Simple Grass Wind (SGW) and
connect it to World Position Offset. Add a
Vertex Color node and plug the red channel into
the Wind Weight of the SGW. Create constants for the Wind Intensity, Wind Speed and Additional WPO, and tweak their values to control the effect of the wind.
Tweak the grass parameters
Now alter the parameters in the
Landscape grass type actor. I set the min and
max sizes to around 0.9 and 1.1 for some scale
variation. For the main grass, leave the density
at 400. I also have Random Rotation and Align
To Surface checked. Now you are free to set up
any other grass types and build up your scene.
Combine them with foliage painting to build up
unique and interesting areas in your level.
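The min and max size settings above simply sample a uniform random scale per instance; a plain-Python sketch of what the grass type does with those values (illustrative only, not engine code):

```python
import random

def instance_scales(count, min_scale=0.9, max_scale=1.1, seed=7):
    """Sample one uniform random scale per grass instance,
    mirroring the Landscape Grass Type's min/max size settings."""
    rng = random.Random(seed)
    return [rng.uniform(min_scale, max_scale) for _ in range(count)]

scales = instance_scales(400)  # e.g. one scale per instance at density 400
print(min(scales), max(scales))
```

Widening the min/max gap gives wilder size variation; setting both to 1.0 switches it off entirely.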
When building larger levels with many
props that are instanced, make sure that you
have set up your pivots in a logical way and that
you keep them in the same place throughout
updating the mesh itself. This removes the need
to go in and adjust all the locations inside your
game engine. Timothy Dries
In Unreal Engine 4.19, the way in which
volumetric fog and lighting works has been
revamped and it now more accurately depicts
changes in darker areas and affects the fog in
more realistic ways. This can be an expensive
feature but it’s definitely worth giving it a try to
see if you can add more dynamics to your
scenes. Timothy Dries
Missing dynamic global illumination coming
from Unity? You can enable a feature inside the
consolevariables.ini called Light Propagation
Volumes, or LPV. This will simulate Real-Time
Global Illumination in Unreal Engine – but it’s
important to remember that it’s still in test
mode, so make sure that you’re careful when
you use it. Timothy Dries
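Concretely, the console variable behind this feature can be switched on in the engine's ConsoleVariables.ini. A minimal config sketch, assuming UE4's standard file layout (the cvar name is the real one; an editor restart is needed):

```ini
; Engine/Config/ConsoleVariables.ini - enable the experimental
; Light Propagation Volumes feature (restart the editor afterwards;
; the directional light must also have Dynamic Indirect Lighting enabled)
r.LightPropagationVolume=1
```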
Keep the poly count low and use detailed,
well-designed textures. Be mindful of how you
rig and animate your characters – try to use fewer than 40 bones. Emilie Joly
Make use of the Unreal Engine
post-processing volume and be sure to set it to
Unbound to influence your entire scene. A
Lookup Table allows you to take a screenshot of
your current scene, make adjustments in
Photoshop, such as improving the contrast with adjustment layers, and then bring those changes back into Unreal. Timothy Dries
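Under the hood, a UE4 colour Lookup Table is a 16x16x16 colour cube unwrapped into a 256x16 texture strip. As an illustrative sketch (layout matching UE4's documented LUT format), here is the neutral LUT you would screenshot, grade in Photoshop and re-import:

```python
# Sketch: build the neutral 256x16 LUT strip UE4's Color Grading
# workflow expects, as rows of RGB tuples - 16 tiles of 16x16,
# red along x within a tile, green along y, blue per tile.
def neutral_lut(size=16):
    """Return size rows of size*size pixels forming a neutral LUT."""
    step = 255 // (size - 1)
    rows = []
    for g in range(size):               # y axis: green
        row = []
        for b in range(size):           # tile index: blue
            for r in range(size):       # x within tile: red
                row.append((r * step, g * step, b * step))
        rows.append(row)
    return rows

lut = neutral_lut()
print(len(lut), len(lut[0]))  # 16 rows of 256 pixels
```

Any grade you apply in Photoshop to this strip is then applied per-colour by the engine when you assign it in the Post Process Volume.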
Use intuitive ways to guide your user through
the experience, using sound, lights and cues.
They need to be able to understand what to do
without any explanation. Emilie Joly
One of the best ways I've found to get photorealistic results is to light scenes using HDRI maps. Instead of a flat skylight, HDRIs add details that help to fake the bouncy look you usually get from a raytracer. In CryEngine, this can be done by generating an environment probe. Joe Garth

When it comes to Unity plugins, download SpatialStories, which you can use to transform any 3D assets into interactive objects. It's an easy and simple way to start creating your own virtual reality (VR) or augmented reality (AR) experience. Emilie Joly

Fathomless Nine - Subaqueous Nauticon by Joe Tolliday. Concept by Lewis Jones

A good immersive experience needs to be iterated on. By having people test it from day one, you will be able to see flaws quickly and change them. Emilie Joly

On the import settings of every 3D model you make, only leave the Optimized Mesh toggle on and the Optimized Game Object in the animation section. Your performance will be much better. Emilie Joly

Cloud shadows are a great way to give your real-time environments some varied light values. These larger gradients aren't something that you usually see in real-time renders, and they give the scene a much more natural feel, so be sure to have a few clouds in the sky. Joe Garth

It's okay to use the same meshes over and over again. If you have a detailed 3D asset such as a photogrammetry rock, you'll find that it can look totally different depending on how you rotate, translate and scale it. The other advantage is that duplicated assets use the same texture maps, so you're saving GPU memory by re-using the same objects. Joe Garth

Zaquida by Marlon R Nuñez

Forest Scene by Joe Garth

When making scenes, do you find that your light seems to change whenever you exit or enter darker areas? This is because Unreal uses a feature called Auto Exposure, or eye adaptation. You can find this inside the Post Process Volume – set both the minimum and maximum brightness to 1 to enjoy consistent lighting. Timothy Dries
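To lock exposure for the whole project rather than per Post Process Volume, Unreal Engine 4 also exposes a renderer default you can turn off in config. A minimal sketch – the setting name below is UE4's real project setting, but check the Rendering section of your version's Project Settings for the equivalent toggle:

```ini
; Config/DefaultEngine.ini - disable Auto Exposure (eye adaptation) project-wide
[/Script/Engine.RendererSettings]
r.DefaultFeature.AutoExposure=False
```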
CryEngine’s volumetric fog
implementation has the capacity to look
stunning. The sun will create convincing light
beams across the scene and pointlights are also
able to affect the fog. This feature is definitely a
great way to bring some depth to your artwork
in CryEngine. Joe Garth
Assets make up 80 per cent of an
artwork’s look – all the fancy rendering in the
world won't save you if your assets aren't up to
scratch. When building environments, make
sure that you're using the very latest techniques
to create assets, or make your purchases from
an asset store that uses the latest methods.
Make sure to use photoscanning, procedural generation, LIDAR data and photo textures – if you want to create realism, you need to start with something real. Quixel Megascans, and now my own library, both aim to provide artists with easy access to real-world data. Joe Garth
I always wanted to use alpha brushes directly
on terrain in CryEngine, so I asked for the help
of Crytek's principal 3D engine programmer
Vladimir Kajalin. Thankfully, Kajalin was able to
create a system in CryEngine that allows the
user to place Decals that displace the terrain in
real-time as well as projecting albedo and
normals. This means that you can place an
entire mountain as a Decal and then blend
multiple terrain brushes together in real-time.
It’s really the best of both worlds – the detail of
real-world terrain combined with a flexible,
easy-to-use system designed for artists. Make
sure you look out for this new feature in
CryEngine 5.6! Joe Garth
Character artist Natalia P Gutiérrez explains how to use omni lights in Marmoset Toolbag 3
to create focus and accentuate the strongest parts of your model
I often see real-time scenes that are
quite small in size, and I have realised that this is
actually mostly due to a lack of assets that hold
up at larger scales, rather than a game engine
limitation. After all, huge scenes are CryEngine’s
forte! To solve this problem I've been developing a new library specifically for large-scale terrain pieces. The brushes are generated using light detection and ranging (LIDAR) data, and then I texture them using aerial photography. Every brush in the library has an 8K resolution. Joe Garth
If you intend on using transparency and refraction-based effects in Marmoset Toolbag – like Newton's rings, for example – it's a good idea to give such an object its own texture set and material. In my experience, not doing so can actually result in some interesting graphical errors. Joe Tolliday

Import your model Start by taking your model into Marmoset Toolbag 3 and check that the scene scale is correct. You can do this by clicking Scene on the left menu and then ticking the Show Scale Reference box under Scale & Units. If the scale of your model is incorrect, it is easy to correct – just use the Scale field.

Plan your composition Once you've plugged in your textures, it's time to find a nice angle, lighting and composition to make the model really pop. This is a beauty render, so I created a close camera focusing on the upper half of the creature, decreased the brightness of my Sky Light and added an omni light in front of it.

Omni lights Now that the camera is locked in, I made my background dark and started working on the lighting. This creature has intricate shapes and I didn't want them to get lost, so I created omni lights in the areas I wanted to accentuate. I started with the back, and I tweaked their shape and settings along the way.

Accentuate the face I also created a set of omni lights on the model's left-hand side. I wanted her face and the area around it to be the focus of the image, so I created another set of rim lights here to accentuate her silhouette. I then switched the main spotlight off so that you can see the effect a little bit better.

Dramatic lighting With all of these lights combined, I got my final result. What I generally try to achieve is dramatic lighting with shadows so that the model doesn't look flat (a top spotlight can help) and rim or omni lights to bring out its silhouette and little areas of interest. The brightest area of your model should be the centre of attention.

Post-production Now it's time to play with the camera settings and do a little bit of post-production work. For this piece, I added Depth of Field and a Vignette. You can also try tweaking the camera's curves. For the bubbles, I used the Glass Simple material preset and changed its transparency mode to Refraction.
Fog, DOF, Bloom and a number of
other effects are cool but they’re easy to overdo.
If you're making a folio piece, the focus should
be on presenting the character, not the scene or
mood. Be subtle in your usage of those effects
or don't use them at all. You can go nuts on your
beauty shot, though. Joe Tolliday
The HDRIs in Marmoset Toolbag have
their own special file extension. If you have a
favourite that you want to add to the presets,
you'll have to import your HDRI, export and
save it, then copy and paste that file into the
start-up directory. Joe Tolliday
Scale is super important in Marmoset
if you want to use certain features like
subsurface scattering. If you're not sure your
character is the correct size, there's a handy
scale reference you can turn on by clicking on
Scene in your hierarchy (not the scene menu).
Try to get into the habit of checking the
character's height and scale before exporting it
into Marmoset. Joe Tolliday
This is a little pet peeve of mine but if
you're exporting a Marmoset Viewer file to
share online, make sure that your character is
centred so people can easily navigate and
inspect your mesh. You don't want to annoy
your potential employers with stuff like this. If
you have a cool idea for composition, save it as
an image. Joe Tolliday
I like to set up my HDRIs quite low – a
value around 0.4 is normally a good starting
point. From there, I create my light rig using
spotlights and playing around with the width,
shape and length values to get softer shades. A
value around 25 for each of them usually gives a
good result. Marlon R Nuñez
In Marmoset Toolbag, I find that I get
better results when I’m importing with FBX
format. It can also help you to play animations,
in case your character is rigged and animated.
You should keep in mind that you need to
update all the smoothing groups if you are
exporting from 3ds Max. Marlon R Nuñez
For the eyes shader set-up, make sure
you enable Parallax under the Surface tab. This
will help to re-create the cornea IOR and will
also improve the speculars and reflections from
it. You can create a normal map for it quickly in
Photoshop. Marlon R Nuñez
Using a modular approach to build your environments is always good. However, make sure the modular system that you're designing caters for all your needs going forward in the very early stages of your project, before committing to details. Testing out all combinations and possible usages in the initial stages of your project can save you a lot of headaches later. Bjorn Seinstra

Artstation Challenge - Ancient Civilizations: Lost & Found entry by Timothy Dries
Amazon Lumberyard lends itself very much to a real-time 'what you see is what you get' type of workflow. Once you have the basic
blockout of the scene in the editor, the engine
will then allow you to quickly iterate on your
establishing shots, whether they are complete
fly-throughs, stills or cinematic compositions.
By adding the cameras to the bistro scene
(above) in the initial stages of the project, we
were able to quickly make any changes that
were necessary to the artwork and lighting
without losing any of the precious production
time. Bjorn Seinstra
Planning your shader usage for a
run-time environment is vital. Always reuse as
many things as possible. Setting up a master
shader helped us enormously in controlling
consistency and allowed us to push general
visual changes very quickly. Bjorn Seinstra
Expert advice from industry professionals, taking you from concept to completion
All tutorial files can be downloaded from:
Total War:
Warhammer II
– Female Dreadlord,
ZBrush, Marvelous Designer,
3ds Max, Photoshop, Quixel
Suite, Marmoset Toolbag 3
Learn how to
• Translate 2D concept art
into 3D game art
• Develop a functional base
mesh for sculpting
• Create a ‘bits box’ for use on
multiple assets
• Unify the look and feel of a
Total War: Warhammer race
• Topology rules and
edge flow
• Create an armour kit for
your character
• Primary, secondary and
tertiary forms, and applying
the rules of composition to
your sculpt
• The art of exaggeration, and
how to get the best out of
your hard-surface bakes
• How to tell a story
through detail
• Create game-ready hair
the easy way
• Let your textures do the
hard work for you
Rich Carey created the female
Dreadlord concept. We work
with Games Workshop to
establish a concept that
suitably represents their IP
while also fitting well within
the Total War universe.
Create a Dark Elf
Discover Creative Assembly’s triple-A techniques for realising a Games
Workshop character in 3D with character artist Danny Sweeney
his tutorial concentrates on the development of the
Dark Elf faction from Total War: Warhammer II – in
particular, the Female Dreadlord unit. When
establishing a new race for Total War: Warhammer, we need
to build a strong visual foundation from which the artists
can reference before full development can begin.
The introduction to this guide will look at the preparation
that is required before we can start developing the bulk of
our character art, as well as the techniques we use to help
ensure that each character within a faction looks
consistent and of a high quality.
Following this, we will talk in depth about the methods
we used to create functional hard-surface armour while
re-creating the Dark Elf visual style in 3D, taking weight,
proportion and gestures into consideration as we build our
assets. We will also talk about getting the most out of your
normal maps, how we create different armour variants for
each character, using the rules of composition to add life
and story to your character, and easy methods to create
real-time hair and fur.
Develop your base mesh When you start creating a
character, you’re going to want to build a strong base
mesh that other artists can use to develop their sculpts
from. In this case, all female elves use the same female elf
base mesh. Using ZBrush, start with ZSpheres and block
out the character. At this point, we are focused only on
proportion. Having a proportionally correct base mesh to
work from will speed up the final sculpting process and it
also gives our riggers enough information to start looking at
building an animation skeleton. Once you have a solid
starting point, convert your ZSphere rig into geometry and
begin your sculpting, again focusing on the scale, proportion
and silhouette of your character.
Even flow – why topology matters To control how
much fidelity we can have when it comes to
sculpting, we create our final base mesh using a method
called retopology. To make sculpting easier, keep your
polygons evenly sized, four-sided and use edge loops
extensively. If you don’t, you will probably end up with
pinching in parts of your sculpt, which can be cumbersome
to deal with. Once you’re happy with your mesh, bring it
back into ZBrush. Creating polygroups for each body part
will help to break up your character during the sculpting
process and this is the point where you want to start looking
at anatomical structure. Don’t worry about skin pores or
blemishes just yet – focus on bony landmarks, fat pads,
musculature and striations.
Creating your ‘bits box’ When you start developing
art for a new group of characters, like we do for
Warhammer, it’s important to research what type of clothing
and armour the group wears, what trinkets they carry, and
other visual elements that they share across units and roles.
Creating a bits box – a file populated with assets that can be
reused across multiple characters – not only saves time, but
it also aids in establishing a sense of visual unison across a
group of characters. Use these to store trinkets, buckles,
belts, chainmail and scalemail links, and other assets that
can be re-used wherever possible.
• Tutorial screenshots
All images © Games Workshop Limited 2017. Published by SEGA
ZSpheres and DynaMesh
ZSpheres are a great way of blocking out characters
or props in ZBrush. Used in conjunction with
DynaMesh and ZRemesher, you can build base meshes
with evenly spaced quads very quickly. Using these
three techniques together can help you create a strong
starting point for your sculpt.
Read your concept Translating a concept into 3D
can be difficult, so take the time to review your
concept. Make yourself aware of the shape and design
language of your character before you even start considering
building it, and analyse how the elements function and how
each element should be constructed. Solving these
problems earlier will save you a lot of headaches later on in
the process. At this point, make a note of any design
elements that may not work in 3D, or are nonfunctional or
impractical. Either change them to make them work or
research some alternative avenues of design before you
begin the design and development.
Consider the flow Before we start to build our
characters, we need to consider a few rules of
composition. ‘Flow’ describes the way you direct your
viewer’s eyes around the focal points of your artwork. For
most characters, the focal point is usually the face –
however, this is entirely up to the artist and their intentions.
The goal is for your artwork to tell a story, so highlighting
visual narrative beats on your character will make the ride
towards the focal point as exciting and smooth as possible.
Think about your forms Silhouette is possibly the
most important aspect of your character. This is
your primary form. Your secondary forms should be made
up of smaller shapes that can still affect the silhouette, and
tertiary forms are your finer details that shouldn’t affect
silhouette at all. As you work, keep making sure that you
check the flow, weighting, proportions and gestures of your
character before moving onto the secondary or tertiary
forms. It can be harder to retroactively change proportional
issues further down the line.
Creating variation in the armour
sets allows characters to stand out
from each other
Chaos theory Chaos is our friend when it comes to
applying secondary and tertiary details. You want to
cluster these forms in areas of interest to attract the viewer’s
eye, but the application of these details should not be
orderly or uniform because this can make your detailing look
too forced and unnatural. Try to avoid repetitive noise and
create varied forms that are interesting to look at – this will
make your composition much more successful. These rules
of composition are not to be followed dogmatically, but we
need to understand what they are and where we should
implement and potentially break them as well as the effect
that they will have on our artwork.
Build hard-surface geometry Hard-surface art
usually refers to the creation of man-made objects
such as guns, armour and cars. It is quite different to
sculpting. Using your 3D package of choice, start to build
your character’s hard-surface elements using simple
geometry. As with sculpting your character, the goal is to
start off with a blockout before you start adding detail. At
this stage, you can use techniques like control loops, hard
edges or smoothing groups to help retain the shape of your
geometry when you start to subdivide your mesh. Ensure
that your edges have some soft curvature to them otherwise
you will run into issues when baking them down to a
low-poly mesh. Once you are happy with your geometry,
import it into ZBrush and prepare it for sculpting.
Smart variation In our game, creating variation in
the armour sets allows characters to stand out
from each other, even if they share a lot of similarities in the
bulk of their outfits. In this case, we created a base Female
Dreadlords kit that is shared across the unit, achieving
variation in pauldrons and head armour. These areas greatly
affect the character’s overall silhouette, so by changing
them we can achieve visual distinction between individual
units while still making them aesthetically similar. Between
these variations, elements such as the breastplate, gauntlets
and even her face are re-used – it’s almost impossible to tell
that these assets are the same in the final renders because
there is so much variation elsewhere.
Detailing your cloth
Other characters in Warhammer, like the Skaven, wear
very old, ripped garments. If you want to achieve this
kind of aesthetic then add articulation points to your
pattern, creating tears and inner holes in your
simulated mesh before exporting. Import your
simulated mesh into ZBrush and using the Clay
Build-Up brush in a low opacity with the gradient
alpha and spray mode enabled, start to add wear and
tear to the edges of the cloth. Once you’re done, use
the Standard and Inflate brushes to add micro fold
detail. Finally, use the Curve Tubes brush to create
frayed strands of cloth. This will help your cloth
elements appear old and ragged.
Organisation is key
Organisation and professional etiquette are very
important factors to becoming a successful artist.
Adhering to naming conventions and structuring your
files makes it easier for other artists to fix bugs or
make edits to your art in the future. In a professional
environment this happens frequently, and the last
thing you want to do is make someone else’s life more
difficult by being messy and disorganised!
Maintain your visual style Whichever methods you
choose to create your character’s armour sets, try to
maintain consistency in your approach and execution as this
will help the different elements look visually similar. Always
search for ways to propagate visual hooks within your
design – this is where your bits box might come in handy.
Taking care to maintain your visual style across the
character variations and the rest of the characters in your
faction not only amplifies the faction’s visual style, but also
helps to ground them in their own reality.
Create cloth and drapery Marvelous Designer is a
fantastic program for generating cloth through the
creation of sewing patterns. We import the character’s base
mesh and then start blocking out the large forms of clothing,
testing properties of different materials as we go. Work with
a high-particle distance at first to save computing power,
and research real-world examples of garment and sewing
patterns to get you started. Once you are happy with the
primary forms of your simulation, lower your particle
distance so that you have a high-quality cloth simulation
before exporting. You are going to want this mesh to consist
of evenly spaced quads for sculpting on. If you have a newer
version of Marvelous Designer, you should be able to export
your mesh as quads. If not, then use ZRemesher to evenly
quadrify the mesh in ZBrush before sculpting.
Make the hair cards Create a base or helmet of hair
that will serve as a canvas for you to apply your hair
planes to when it comes to your low-poly. Next, create two
or three flat planes and using the Curve Tubes brush in
ZBrush, start to create varied hair clusters on each of them.
It’s important to make sure that your hair planes differ in
length so that you can get the most disparity when it comes
to setting up your hair in the low-poly. We can also make fur
cards for creatures using this exact same method, though
the cards should be smaller and shorter, and the
arrangement of hairs should be scragglier.
Taking care to maintain
your visual style across
the character variations
and the rest of the
characters in your faction
not only amplifies the
faction’s visual style,
but also helps to ground
them in their own reality
Take great care when
developing your low-poly mesh –
you’ve spent so much time
sculpting and building your
character, so you want the best
results possible to showcase all
your hard work while developing
geometry that riggers and
animators can work with
Sculpt the lizard scales To create convincing-looking
lizard or dinosaur scales, start by sketching the
demarcation lines between the larger forms with the Dam_
Standard brush, paying careful attention to flow and form.
Next, sculpt in the larger, primary scales and allow the
shapes of the secondary and tertiary forms to flow from
them. We are following the same rules of composition here,
but thinking and applying it in an organic manner. Starting
out with the larger forms and working our way down helps
to make this daunting task become so much easier, and it
also yields fantastic results.
Create your lowpoly mesh It’s very important to take
great care when developing your lowpoly mesh
– you’ve spent so much time sculpting and building your
character, so you want the best results possible to showcase
all your hard work while developing geometry that riggers
and animators can work with. Give your hard-surface
elements enough geometry to get the best out of your
bakes. Take care to give overlaying elements, such as belts,
similar geometry to its underlying topology and create
functional topology around joints such as the legs and
elbows. This will help riggers and animators avoid bugs
when animating the mesh later on.
Understand normals It’s imperative to understand
proper normal mapping theory before baking down
your asset. To get the best possible bake, you need to create
hard edges and smoothing groups where your geometry
starts to angle at 90 degrees and below. These are the
places where the smoothing on your geometry starts to
degrade. These changes need to be propagated in your UV
maps, ensuring that areas where you have separated your
normals have their own UV shells and given enough space
for padding in UV space. This is important to avoid baking
errors and to create seamless, perfect normal maps.
When sculpting wear and tear in fabric or scratches
in objects like real swords, consider the storytelling of
their application. How did the character receive this
damage? Would the attack propagate any further
imperfections anywhere else on their body? Is the
material responding in a realistic, convincing manner?
Following these rules when detailing will help you not
only to tell a story through your sculpt, but also further
enhance your sculpt’s strong composition. As such,
you should consider the rules of composition when
you create these details.
Smart UVs Try to be smart about the application and arrangement of your UV map.
For example, the Female Dreadlord only needs space on its UV map for one version
of each pauldron since it can be mirrored over to the other side. This will help you save
texture space and allow you to get the most out of your textures. Try to make the most of your UV space and avoid being wasteful.
Danny Sweeney
A character artist working at Creative Assembly on
the Total War: Warhammer team, Danny spends his
spare time sketching, sculpting, listening to
podcasts, and improving his workflow and art.
Total War: Warhammer II – Skaven Grey Seer, 2017
3ds Max, ZBrush, dDo, Photoshop,
Marvelous Designer
The Grey Seers are powerful sorcerers, capable of
channelling eldritch energies in destructive ways like
levelling armies with lightning, or summoning swarms of
rats. As chief agents for the Horned Rat, only a fool would
ignore their counsel.
Total War: Warhammer II – Alastar,
The White Lion, 2017
3ds Max, ZBrush, dDo, Photoshop,
Marvelous Designer
Alastar the White Lion was a character created for a fan
of the game through the Make-A-Wish Foundation. It was
an honour to create this character for one of the
franchise’s biggest fans.
Textures and material libraries When you’re happy with your UVs and bakes, texture
your character in your chosen texture toolset, such as Substance Painter or Quixel
Suite. You can really let your textures do a lot of work for you here and if you’ve generated
your maps properly then you should be able to easily create surface details such as
scratches, grunge and grime. Remember that you can save these materials if you’re happy
with them – this is particularly useful if you, or another artist, plan on creating another similar
character in the future.
Total War: Warhammer II –
High Elf Spearmen, 2017
Maya, SpeedTree, Arnold
High Elf Spearmen are an elite, disciplined line of warrior
that puts even the most battle-hardened Human fighters
to shame. After years of training, a regiment of High Elf
Spears becomes a finely honed killing machine.
Monowheel, 2017
Model an adaptable
Learn how to turn a 2D concept into a 3D model that can be
used both in games and cinematically
Modo, KeyShot
Learn how to
• Model and design based
on a 2D concept
• Convert a model into
both cinematic and
in-game assets
• Create our own HDR
map for KeyShot
This piece was made for
Wolfenstein II: The New
Colossus. It’s the standard
motorbike that was used by
the German Army.
he idea behind this tutorial is to give you an insight
into how to make your models in an easy and quick
way. We will go over how to convert a standard
motorbike idea into a 3D design that can work for
animation purposes and function in both cinematic and
game environments. This tutorial also comes with some
extra modelling tips and tricks to make your life easier and
to help you work in a nondestructive way.
We will be using 3ds Max but you could use any 3D
software package. It’s also important to note that this
tutorial isn’t specifically centred around the Wolfenstein
universe – I have tried to keep it as broad as possible so
that you can apply these tips and tricks on your own work.
This tutorial is made for people with a basic understanding
of industry terms and those who are familiar with the
production pipelines of game assets.
To start with, we will go over some quick methods to
make your model lowpoly-friendly so that it is easily
convertible and then we will explore some quick
presentation passes in KeyShot before finishing it up in
Photoshop for presentation.
Character mocap When it comes to such an important asset, it can never hurt to double-check everything. To do that, you can always use a motion-captured version of the character. If you don't have one of those, you can use a biped in 3ds Max, as these are automatically rigged and skinned. Just place it into the correct position. But keep in mind that this is just a skeleton, so imagine some extra space for clothes – for example, check whether the hands can reach the steering wheel or if the character can reach the ground when they sit down.
Find your style When you work on a specific
universe, it is important that every asset shares the
same visual language. For example, there are two elevators I
made – one for Doom and the other for Wolfenstein. The
Doom one has a lot of sharp, bevelled shapes with big,
shaped details, while the one that was made for Wolfenstein
(and is pictured) has a lot of round, bulky shapes that are
supported by bolts, rivets and a lot of small details.
Mocap and animations The first thing that needs
acknowledging is that this is not a static asset
– this dynamic model needs to be animated for different
cutscenes. Let’s start by defining the elements that will be
interactable so that we don’t end up stitching together
pieces that should move independently. In this case, we
want the steering wheel and the tyre to move. We also want
the kickstand of the motorbike to be functional, and it can’t
obstruct the driver when they are sitting down. It’s important
to prototype all of these – that way riggers and animators can
start on a motion capture.
• Tutorial screenshots
Signify key elements When making a specific
vehicle like a bike, it’s important to find the assets
that will make it feel both believable and authentic – it can
never hurt to even exaggerate these shapes. In this case, I
went for nice big, curved exhaust pipes and a double
engine cooler on both sides. To make it feel like the 1960s,
we should use one big headlight on the front of the vehicle.
Every bike needs a steering wheel, of course, so let’s use the
same principle: a nice curved, exaggerated shape.
Machinery support Now that we have the basic
motorbike elements defined, we need to support
them with a lot of detailed pieces so that they feel like
they blend into the main structure. Note that it’s important
to use the same materials on all these pieces so nothing
stands out too much. You can see those more as ideas to fall
back on when you are out of inspiration. The more different
shapes you have, the better. They should just be there to
bring the whole concept together. Personally, these also help
me to build up confidence for big shapes later. If you make
some good kitbash pieces, it’s really easy to prototype some
different designs in no time.
Instance tilted models It can be annoying to work
on models that are positioned at an angle. Your first approach should
be working with edges constrained, which means that the
edges and polygons will move on the existing edges and
surfaces. You can also work on a local or screen axis. In
Modo, you can even align your workplane based on a
selection. But if you want an easy solution, use the following
method in 3ds Max. Position your model in a straight line,
then follow it up by making an instance and place that one
into the correct position and rotation – that way you can
work on both models while they update themselves. This is
a great way of experimenting and seeing how things look in
different circumstances.
Cinematic versus in-game
Our lowpoly tyre has its own UV set and this is
because it is easier to make a separate version without
having to redo the complete UV set of the vehicle. You
also won’t have to rebake the whole vehicle. It’s much
easier to make different dirty versions of the tyres
later. While making the cinematic asset, remember
to keep the unique and expensive pieces in a
separate bake file so that you just need to optimise
them. At the end, you just add them to the base again.
Check the camera It doesn’t hurt to check where
the camera angles are during the cutscene. Sure,
when this model is used as a static prop in game it has to
look good from every angle, but the player is restricted to
certain camera angles. The camera can be placed anywhere
during a cutscene, like a pan from the bottom of a vehicle to
the top, so let’s check for annoying camera angles. In this
case, we have a still, close-up shot of the tyre so let’s
focus on this part as an in-game, low-poly example.
Tyre pattern Every monowheel needs a tyre, of
course, so let’s start with this. We need to come up
with a pattern that has enough depth that it pops out in the
cinematic version. However, for the in-game rendition we
need to be able to bake it in big shapes – think boxy – or you
might need to be able to bake it completely flat. Finish it off
with some cuts in the big shape to make it pop out a bit
more. For the cinematic version, always put mountain bike
pins on it just to add that extra touch.
Create a tyre module Let’s make this tyre pattern
modular because that way we only need one
segment and then we just repeat it. Using this method
makes it easier to put a lot of texture information into a small
part and then we tile it over the whole shape. The best way
of doing this is by making it flat in all axes and later bending
it in each direction, one at a time. It also makes things
simpler when it comes to making the cinematic and
low-poly versions at a later point.
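The modular approach above boils down to a radial array: model one flat tread segment, then repeat it around the wheel by rotating copies in equal angular steps. A minimal sketch of the placement maths – the function name and values are illustrative, not taken from the tutorial files:

```python
import math

def radial_array(n_segments, radius):
    """Place n copies of a tyre-tread segment evenly around a wheel.

    Returns (angle_radians, x, y) for each segment centre; in a DCC
    you would rotate each instance by its angle about the wheel axis.
    """
    placements = []
    step = 2 * math.pi / n_segments
    for i in range(n_segments):
        a = i * step
        placements.append((a, radius * math.cos(a), radius * math.sin(a)))
    return placements
```

Because only the one flat segment carries unique detail, any change to its pattern propagates to every copy, which is exactly what makes the cinematic and low-poly conversions cheap later on.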
Tyre details Now that we have got our tyre, we need
to match it up with the main shape, so let’s add some
extra shaping around it so that it will snap nicely with the
module. Try to shift the details to the outside so that they
contribute nicely to the final product. If possible, add some
shapes that stick out from the main body so that we don’t
end up with a flat shape – but, of course, don’t overdo it.
Finish it off by adding some small details with another colour
so that the tyre doesn’t become too monotone. In this
particular case, we just cut inside the support rubber and
add some copper twisted wires. These will pop out nicely
when they catch the light.
Low-poly friendly What we want from this
tutorial is to be able to use the same model for both
cinematic and in-game purposes, so we want it to be super
detailed – but that doesn’t mean we can’t be smart about
handling it. Basically, we want to start adding extra shapes
to make it look even more detailed. Let’s take the cooler, for
instance – we add extra plates so that we can bake it as one
solid mesh. Another example of this method is the radiator
next to it. We don’t remove details in the low-poly version
– we just add details in the high-poly so that we can support
these complicated shapes with a cheaper budget.
Paint in the details Sometimes it’s just easier to add
more details to your texture than actually modelling
them. There could be different reasons for this like the fact
that it takes less time, it could destroy your topology, baking
it would make it too blurry or you’ve just come up with some
new ideas. So let’s paint them using Substance Painter. Most
of the time I just add extra height information – for example,
for painting in holes and bolts. But let’s take another object
as an illustration: the front plate. Creating this plate would be
quite a lot of work, especially if we wanted to keep the same
topology as we have for the chassis, so we create this in
Substance Painter instead.
Explore the main shapes Of course, we should have
some empty big shapes for the chassis but that
doesn’t mean that they shouldn’t have an interesting
silhouette, so let’s search for these. Like I have mentioned
before, Wolfenstein is all about nice curves with small details
in them. It isn’t a bad idea to cut a giant shape into two big
shapes and then make them look like they snap together.
Next to all of this, it’s not a bad idea to just explore shapes
by drawing the outer edges and then trying different
modifiers. Follow this up by snapping them into place next to
the other test shapes.
Working method
Making these shapes can be difficult so it’s a good
idea to come up with a nondestructive workflow. Build
up these shapes flat, but make sure that you use clean
topology with enough support loops. By doing this,
you can bend your shape into different directions and
build it up steadily. The cool thing about 3ds Max is
that you can build on top of these bend modifiers, so
first bend your model in one direction. Add some
more details like circular shapes because bending
these is never a good idea. Then bend it again in the
other axis.
Set up floaters
Let’s finish it off with some small details. Floaters are
super quick and useful to use on your high-poly model
and they are essentially just small details you place on
top of your asset to create more depth. It’s an easier
method than having to cut these shapes out. Once you
have started on your lowpoly, you can just remove
these immediately and get them baked in. To blend
them in nicely, just add an extra polygon strip at the
edge of the floater. They are a great optical illusion
until you view them at an angle steep enough to break the illusion.
Break up hard shapes As this is a motorcycle,
everything on the model is industrial and has a hard
surface. You might want to break this up a bit – for example,
we could add some bags on the back or cover some of the
pipes with Titanium Heat Wrap. But it doesn’t always have
to be cloth based; on this model, you can add some extra
cables with elegant shapes and a double leather seat with a
part to sit on and a part to support your back.
Make it believable Now that we have all the big
objects in place, the final product is starting to come
together – but it can’t hurt to add some extra support
structures so it feels extra heavy and believable. Don’t forget
that this bike is completely held up by the tyre. Polish it up
by adding some extra holding bars on the front and back of
the vehicle. Don’t make those things too detailed – just make
a metal material with some bolts. Personally, I think it’s a
nice idea to keep the same material on objects with the
same function. Just make sure all these extra pieces don’t
intersect with animations of the driver.
Pick KeyShot I always prefer to use KeyShot when it comes to rendering – it is quite
expensive but it comes with a gigantic material library. It also supports making HDR
maps. You can even stamp PNGs on your models, which is great for stickers and our dirt
marks. The newest version even comes with transparent materials like cloudy plastic and it’s
able to handle billions of polygons without any problems. The only negative thing about it is
that it does not support the edge shader from Modo and you need to subdivide your model
before exporting. Luckily, KeyShot provides a BIP exporter for 3ds Max models, which makes
exporting models quick and super easy.
Zoom out
It never hurts to zoom out once in a while – you can be so
focused on a small piece that you forget the big picture.
We don’t want to spend hours and hours on something
that will never be noticeable. Plus, it never hurts to
double-check the silhouette of shapes from a distance.
Sometimes I over-detail things a bit but once I zoom out, I
see that most of them just became noise factors. Don’t
forget that these assets are for games – most people
won’t even spend time looking at your singular asset.
Avoid tunnel vision
When you work on something for a long time, you start
missing mistakes. Try to make a test render twice a week
and paint over your model, just with arrows and words, or
blend or paint some pictures into it. If you can, make this
the first thing you do when you start each day so that you
can find mistakes you previously overlooked. Once you are done
with these, you will be pumped to start working on new
areas. If you want to make it easier on yourself, ask some
friends to give you honest feedback.
Post-production in Photoshop Post-production is my favourite and it’s a step you
should never skip because you want your hard work to reach its fullest potential. In
the image above, you can see the difference before and after. I personally always render my
passes separately: Metal, Global Illumination, AO, ID and a combined one. This way, I get
more control for tweaking materials and blending them. You can also add colours based on
cavities and ambient occlusion. Use your ID to mask out different pieces. Try to keep your
background simple – the focus should be the model and nothing else.
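As a rough illustration of that pass-based workflow, here is a minimal NumPy sketch that multiplies an AO pass over the beauty render and then uses an ID pass as a mask to tint just one material. The function name, pass layout and values are my own stand-ins, not from the article’s files:

```python
import numpy as np

def comp_passes(combined, ao, id_pass, target_id, tint, strength=0.5):
    """Darken with AO, then tint only the pixels whose ID matches target_id.

    combined: (H, W, 3) beauty render, ao: (H, W) occlusion in [0, 1],
    id_pass: (H, W) integer material IDs. All names here are illustrative.
    """
    out = combined * ao[..., None]            # multiply AO over the beauty pass
    mask = (id_pass == target_id)[..., None]  # boolean ID mask, broadcast to RGB
    tinted = out * (1 - strength) + np.asarray(tint) * strength
    return np.where(mask, tinted, out)
```

Keeping each pass separate like this is what gives you the control described above: the AO darkening and the per-material tweak stay independent layers instead of being baked into one flat render.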
All tutorial files can be downloaded from:
HDR set-up
Lighting is key when it comes to rendering so you want to
make sure that you have full control of it. Luckily, you are
able to do this with the HDR editor in KeyShot Pro. Try to
have a lot of contrast, eg an extreme dark background but
with some really bright soft lights and finish it off with an
accent colour. Another nice touch is adding a ground
colour to the map as that way you get some nice bounce
light from below and it makes it feel like there is a living
environment around your model.
Master lighting
and materials
Observe how light interacts with matter, and create advanced
shading effects in Houdini and Clarisse
Houdini, Clarisse
Learn how to
• Observe light and
matter interaction
• Add extra details to
the shading
• Use dynamics and other
tricks to achieve richer
shading effects
• Do experiments with
real-world objects to
understand how things work
This study scene was created
for Isotropix to use as a
demonstration of Clarisse. It
is based on scanned objects,
while the background is my
photogrammetry and the
statues are from
It is a trend in VFX and other CG fields that artists use
‘uber’ shaders that deal with almost all of the usual
material effects we see in daily life. However, there are
some kinds of phenomena that can’t be rendered with these
shaders. Additionally, working with these is more like being
a DJ – you mix together the different shading behaviours,
which were originally created by shader writers. As the
best DJs are also musicians, we can get better results if we
learn more about how light interacts with matter and also
re-create these shading effects from scratch, even
inventing new ways to add an extra layer of realism or a
unique look to avoid results that are too standardised.
Observe directly We can achieve the most detailed
and adequate shading effects if we do research,
which includes direct and indirect observations. Directly, we
can observe different optical phenomena – one of the best
times is during cooking and food preparation as a very wide
range of material effects is there simultaneously. Just
observe how flour or ground seeds get darker but more
specular with water, or how egg white converts from a
dielectric-like material to a subsurface scattering one. Another
advantage is that you can touch and smell these materials,
which is in contrast with the intangible nature of CG.
• Tutorial screenshots
• Lighting reference video
The shape of light We can use any display – a
monitor or smartphone – as an arbitrary shaped
and textured light source to study how the shape of it can
affect the character of the lighting, especially the penumbra
area of the shadows. This is similar to the lighting in Gravity
and many other modern movies, just on a smaller scale. We
can compare simple shapes like square and circle. You can
imagine yourself as an ant walking through this blurry edge.
The area of a square shape decays linearly from your
viewpoint but the area decay of a circle – thus the incoming
light amount – is sinusoidal as a function of the distance.
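You can verify the ant’s-eye claim numerically: sweep a straight occluder edge across a unit square and across a disc of the same width, and compare how much area is revealed at each point. The circular-segment formula below is standard geometry; the set-up itself is a simplified illustration:

```python
import math

def square_fraction(x):
    """Fraction of a unit square revealed when the edge has travelled x in [0, 1]."""
    return x

def disc_fraction(x):
    """Fraction of a unit-diameter disc revealed at travel x in [0, 1]:
    area of a circular segment of height x and radius 0.5, over the disc area."""
    r = 0.5
    h = min(max(x, 0.0), 2 * r)
    seg = r * r * math.acos((r - h) / r) - (r - h) * math.sqrt(2 * r * h - h * h)
    return seg / (math.pi * r * r)
```

Near the start of the sweep the disc reveals area much more slowly than the square, which is why a round light produces a softer roll-off in the penumbra than a square one of the same size.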
Real-time shaders In Clarisse we get the final
shading in the 3D viewport because it is also
raytraced. However, the viewport in recent Houdini versions
can show the shading and lighting effects with
displacements very well, especially if we use the Principled
shader. For virtual cinematography work, it’s important to
see our scene with a decent level of lighting and shading but
with smooth playback and interaction. In other cases it’s
recommended to switch back to constant shading with
wireframe in the viewport. This makes the scene more
abstract and your eyes won’t adapt to the game-like visuals,
which is an advantage for the rendering decisions later.
Colour management – ACES If you use the default
colour management settings in your 3D software, it
is time to set up an advanced one like ACES. It’s easy and
there is a short how-to available online. The
default sRGB is just a gamma encoding process for the
monitor. In a nutshell, ACES digests the wide gamut scene
linear values and renders the image perceptually correctly
on your limited gamut and dynamic range monitor, even if it
is DCI capable. At its heart there is a 3D LUT-based
film-emulation process, which is a standardised and
neutralised digital implementation of the photochemical
pipeline but in linear colour space. It nicely avoids the
clamping at value 1 and works with this wider dynamic range
while also being better for look-dev. You still need to colour
grade your image but it’s a good base to start with.
Additionally, it is a VFX and DI industry standard.
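For reference, the “gamma encoding process” mentioned above is the piecewise sRGB transfer function from IEC 61966-2-1. Note how it simply clamps scene-linear values at 1.0 – exactly the behaviour an ACES view transform improves on:

```python
def srgb_encode(linear):
    """Piecewise sRGB transfer function (IEC 61966-2-1): scene-linear -> display.

    The naive clamp at 1.0 throws away everything above display white,
    unlike a filmic/ACES tone curve that rolls the highlights off.
    """
    c = min(max(linear, 0.0), 1.0)
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055
```

Feeding this function a rendered value of 5.0 returns the same result as 1.0, which is the highlight clipping the paragraph above describes.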
Distance and subsurface scattering With
subsurface scattering materials, the distance that
we are viewing from is very important. For instance, if we
observe a snow-covered area from within just a few metres, the
subsurface scattering effect is significant. However, if we are
looking at a snow-covered mountain from above, it looks
more like white paint. The local depth of the scattering and
absorption remains the same, so from close distances it can
be a couple of pixels in our image, but this length can be
negligible from farther distances. As a result, it’s better to
use a proper diffuse lobe than to waste our resources on
subsurface scattering.
Disadvantages of bump mapping Bump or normal
mapping has some major disadvantages when it’s
compared to displacement mapping, like the lack of parallax
effects, self-shadowing and proper modulation of subsurface
scattering. So we can get more realistic effects in most
cases if we use displaced geometry rather than bump
mapping. It can be a prebaked one but there might be too
much memory consumption. Houdini’s Mantra dices the
displaced geometry on the fly and we can set the quality
settings of this on the Rendering/Dicing subtab of the
output driver panel.
Solid object realism The standard surface-based
3D modelling can limit the achievable details and
realism of our image because the real objects are solid and
we can see the inner structure of them to some extent. This
can be quite a considerable difference when the viewpoint is
close for similar reasons to subsurface scattering. So if we
have enough rendering resources, we can build the physical
structure of the material in Houdini with procedural
modelling and shading techniques – like this volume
procedural, or even with simulation nodes involved. This is a
recent trend in VFX, though.
Paint with motion blur There is a technique in
Clarisse that allows us to render long-exposure
images, similar to the bulb mode of cameras. We can compress
the multi-frame animation of the objects, which can be
Alembic, into the time range of the motion blur simply by
setting the Frames Per Second value of the time
slider to a smaller one. In this example, we baked the motion
vector AOV of a moving sphere, which has a noise clip map,
then used the result for the shading network of the car paint.
The motion vector drives the anisotropy direction and the
alpha defines the scratched area. With this
pseudo-simulation technique, we are able to create
procedural shading for a destruction scene, even if this
comes from other 3D software without any extra property
like collision texture.
Geometry-based shading We can go further with
this setup and use geometry-based procedural
shading. While these kinds of damaged surfaces usually
have scratches of a wide range of size, it’s enough to just use
anisotropy for the smaller ones but the larger ones are
individually visible. Here, we created a separate context
where we generated a geometry-based texture. The
scratches were scattered thin cylinders and their direction
was driven by the motion vector in the previous step. We
can also use the paint tools in Clarisse to scratch the surface
directly, then scatter and orient them by hand.
Paint with dynamics It’s relatively easy to set up a
dynamics-based texture generator pipeline in
Houdini, even for non-FX artists. We can use the shelf tools
for this purpose, which allows us to achieve the results
quickly and we don’t need very detailed and sophisticated
simulation if we just want to generate textures. In this
example, we polluted the surface with pyro solver-based
smoke simulation to get a realistic pattern of soot residue.
With these techniques, the opportunities are endless.
Sound blur There are VFX shots where we feel that
something is missing but consciously everything is
there. If we investigate further, there are subtle but
sometimes quite obvious effects in reality that can affect the
appearance of objects. One of them is the vibration of the
surfaces or even the visible sound waves. Just think about
the mirror of a gasoline car and how much it can blur its
reflections when the engine is running, but the entire
surface of the car can have clearly visible vibrational blur.
We can add these effects in Clarisse or Houdini with a
brute-force method – subframe vibrational animation of the
transform parameters. The most procedural way is to use
motion FX in Houdini and you can even use a song in MP3
format as a wave input.
Patina, tarnish, corrosion The patina of bronze and
other corrosive materials depends on many things,
which we can mimic with shading nodes. This mainly
remains in the more occluded areas of the surfaces and that
is why we use occlusion nodes. The parts that are more
exposed to wind, touch and precipitation are rather shiny.
The latter gives the upward-facing surfaces more reason to
remain shiny, so we use the slope in the gradient node as an
input for other nodes. The clear bronze areas look like they
have a thin and shiny but dark coating, and the transitions
have a greenish tint and a weaker specular lobe – that’s why we
use multiple reflection nodes.
Unlit photogrammetry textures Even if we take
photographs in overcast lighting conditions for
photogrammetry textures, the lighting of the original
location is still baked in our texture. One way is to use the
linearised phototexture directly as an emission texture
without any raytracing on itself, and render it as sky light
diffuse AOV. However, we can get a proper diffuse albedo
texture if we remove the light-shadow effects. In Clarisse,
we can create a separate context and set up a lighting
environment that is somewhat similar to the original scene
– a homogeneous dome is usually enough and then we bake
this lighting as an image. Now we can use this with a divide
node on top of the original phototexture. This isn’t fully
correct but it’s better than using the original photo.
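The divide-node trick translates to one line of array maths – a sketch assuming both images are already linearised float RGB arrays (the function name and epsilon are mine):

```python
import numpy as np

def delight(photo_linear, baked_lighting, eps=1e-4):
    """Approximate diffuse albedo = linearised photo / baked lighting render.

    Mirrors the divide-node set-up described above: dividing out the baked
    dome lighting removes most of the shading left in the phototexture.
    eps guards against division by zero in fully black lighting pixels.
    """
    albedo = photo_linear / np.maximum(baked_lighting, eps)
    return np.clip(albedo, 0.0, 1.0)
```

As the article says, this is only an approximation – bounce light and residual shadows from the original location still leak into the result – but it is a clearly better albedo than the raw photo.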
Precompute lighting If we have limited rendering
power, it’s recommended to bake the static part of
the shading and lighting of our scene – usually the direct and
indirect diffuse reflections. While we sometimes need sharp
results for the direct diffuse effects, we can use baking just
for the indirect diffuse ones. We can create a separate 3D
image layer in Clarisse for each of the involved objects,
switch on the UV Bake option and use AOVs to separate
diffuse effects. In Houdini, the Indirect Light node is a
photon map generator. If we use it with the Indirect Global
Photon Map setting, it also computes final gathering-like
effects that restore the detail loss, which is the basic
disadvantage of photon maps.
LPEs With Light Path Expressions you can separate
all the steps of the raytracing computations into
different AOVs and comp them manually. This gives you not
just more control, but also the interactivity of the refinement
of the colours and values during the comp is much better
than with IPR and doesn’t break the rules of the physical
approach. You can create AOV groups for each light,
separate diffuse events from specular ones, put each global
illumination bounce in different AOVs and so on. While the
renderer just writes out the results of the computation steps,
separating lights into different layers, for example, does not
need the recomputation of the image for each light.
Jody is a senior environment
artist with ten years of games
industry experience who has
recently taken the plunge into freelancing. Her credits
include Batman: Arkham
Knight, Batman: Arkham VR
and Dirty Bomb.
Create an underwater
scene in UE4
In this tutorial you will learn tips on how to create a
fantasy underwater scene in Unreal Engine 4 using
techniques from AAA game pipelines. You will also get a
better understanding of how to block out the necessary
components for a strong composition with an interesting
narrative. We will then look at different methods of adding
foliage including traditional modelling techniques and
adding SpeedTree foliage to the scene. In the final stages
we will explore lighting and add caustics and the final
polish, which will bring the whole scene together.
Block out the scene I wanted to show something
beautiful in the dragon and nest, and then draw the
eye to the boat and oil spill to make the contrasting emotion
of the scene really strong. I made use of the rule of thirds and
used the most saturated colours in the scene to make the
dragon the initial focus. I then used the leading lines of the
kelp, the dragon’s gaze and the fish to draw attention to the
boat so that the viewer discovers the story as they look at
the scene. Nailing this in the blockout is crucial to the
success of the image.
• Tutorial screenshots
• Making-of video
Shape the kelp The kelp foliage was key in the
composition of this scene and was created using
3ds Max as I needed very specific shapes. I began by
making one straight section of kelp that can tile and looks
interesting from all angles. I then created a spline, which
made up the shape of the kelp. I duplicated the mesh to
create the correct length and then used the path deform
modifier in Max to deform the kelp to fit the spline. I made
variants of the model by duplicating and changing the spline
to different shapes and by tweaking the deformed mesh to
give it more shape variation.
Add movement to the kelp When making the
leaf UVs, I laid out the base of the leaf at the bottom
of the texture and the tip of the leaves at the top. Now, by
using a vertical gradient to mask this, we can have the leaf
base static and the tips moving, which will give us some nice
leaf sway but allow us to keep the overall composition that
the kelp shape is allowing. To do this, multiply a sine node by
a gradient texture and plug this into the world position offset
of your material. Adding a multiplier means that you can
control the strength of the effect.
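The sine-times-gradient set-up can be sketched outside the material editor to see why the base stays put: the gradient (here simply the leaf’s V coordinate, 0 at the base and 1 at the tip) scales the sine wave to zero at the bottom. Names and default values are illustrative:

```python
import math

def kelp_offset(v, time, amplitude=0.15, frequency=1.2):
    """World-position offset for a kelp vertex.

    v stands in for the gradient texture sampled along the leaf UVs:
    0 at the leaf base (static), 1 at the tip (full sway). The amplitude
    is the multiplier that controls the strength of the effect.
    """
    return amplitude * math.sin(time * frequency) * v
```

In the actual material, `amplitude * sin(...)` is the sine node times the multiplier, `v` comes from the gradient texture, and the result feeds the world position offset input.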
SpeedTree foliage The small grasses and algae
bushes are made using SpeedTree. I began by
creating a simple blades generator and tweaking the
properties to define the radius of the plant. By tweaking the
frequency and the length segments, I had precise control of
the polycount and optimisation. I enabled the wind before
saving the file and importing into Unreal. Once in Unreal,
add a Wind Directional Source and your foliage will sway.
For the plants, I tweaked the wind animation properties in
SpeedTree to get a slow, swaying movement that looked as
if it belonged underwater with drifting currents.
Lighting and fog To achieve the underwater feel,
I used quite a dense Exponential height fog, setting
this to be volumetric to give the impression of the light
scattering through the water. The main directional light
mimics the Sun’s direction from above the surface and is a
very pale blue with the volumetric scattering intensity set to
1.2. I then added a moveable skylight with a deeper blue and
higher volumetric intensity. Further point and spotlights were
used to make things pop, such as the rim lighting on the
dragon. I also added a slightly warmer light on the eggs to
give them an incandescent glow. A green volumetric light
was used in the midground in order to mimic the change of
colour through the water going from pale aqua green to a
deeper, darker blue.
Caustics and final polish To give the scene
some final pop, I created a simple panning caustics
material. I created a water caustics texture by making some
tweaks to the Cells node in Substance Designer. I then
created a material inside Unreal Engine 4 with its material
domain set to Light Function, which panned this texture in
two slightly different directions. This is then added to the
light function slot inside my Unreal Engine 4 spotlight to
project some fake caustics around the scene. For the last
touches, I created some simple translucent materials with
dust clouds on textures and applied this to planes on the
seabed to show debris being kicked up from the movement.
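The panning part of that Light Function material reduces to offsetting UVs over time and wrapping them back into the 0–1 range; multiplying two copies panned in slightly different directions breaks up the repetition. A hypothetical sketch – the pan speeds and the `sample` callback stand in for the texture lookup:

```python
def pan_uv(uv, speed, time):
    """Scroll a tiling texture: offset UVs by speed * time and wrap into [0, 1)."""
    return tuple((c + s * time) % 1.0 for c, s in zip(uv, speed))

def caustics_intensity(uv, time, sample):
    """Two copies of the caustics texture panned in slightly different
    directions and multiplied, as in the Light Function material."""
    a = sample(pan_uv(uv, (0.03, 0.01), time))
    b = sample(pan_uv(uv, (-0.02, 0.015), time))
    return a * b
```

Because the two pan directions never line up, the product of the two samples drifts and shimmers instead of visibly looping, which is what sells the fake caustics from the spotlight.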
Design your first
prop sword
Sarah and Dhemerae make
up TheLaserGirls – 3D
printing educators,
evangelists and enthusiasts.
They are also bloggers at, a
space that fosters positive
teaching and learning around
3D technology.
• Tutorial screenshots
• SolidWorks files
As cosplayers, we have found the sword to be a
great first build. It is a classic in the armoury that
more innovative weapons build upon, and it has a
rich history that provides an exciting inspirational and
structural starting point for those looking to delve into
prop-making. Swords also make a great first model for
parametric solids modelling, for they have various planar
and organic surfaces that require the modeller to touch a
diverse number of tools, while providing a familiar base
that can be easily built on and customised.
With this particular style of modelling, establishing and
understanding the workflow is pertinent when starting out.
The software’s rigidity creates limitations that require you
to bring a chess-player mindset to your workflow –
knowing the endgame before the game begins, and making
choices that will ripple into later parts of the build.
You will begin with a 2D sketch that you will add
relationships and dimensions to, locking in your shapes and
sizes for the piece. Next, you will use features and extrude,
cut, revolve and so on to make the 2D sketch a true 3D
form. From there, you can build more on your base and add
details like fillets and chamfers.
If this sounds like a lot, don’t fret – a powerful piece of
assistance you’ll get from SolidWorks is that it will alert
you or even prohibit you from performing actions that are
not mathematically sound. For beginners, this helps
provide added structure and context during the
overwhelming initial learning process.
In this tutorial, we will construct a to-scale, 3D-printable
Cadian sword from the Warhammer series. The blade and
handle will be modelled in separate part files that will then
be joined in an assembly. We will place an emphasis on
workflow during this process and walk you through why
we made certain decisions to build this piece the way we
did. There are several ways one could go about modelling
this piece, and each way would look a bit different. We
wanted to not only perform an efficient build, but also to
use a balanced mix of tools, so you can play with all
kinds of elements in the software.
Draw your reference sketch Open a new part
file, select a plane and begin using the Line tool to
draw out your sketch. This sketch sets the relative size and
scale of each component that will come after it. As per that
chess-player mindset, this is the time to set your units of
measurement – toggled in the lower-right corner; we will
be using IPS – as well as create and dimension any helpful
construction lines or guides, and establish baseline
relationships between your sketch lines.
Do not make this sketch too detailed; you just need to
capture the overall shapes.
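If you are working from reference artwork, converting image measurements into real-world sketch dimensions is simple ratio arithmetic. Here is a minimal Python sketch of the idea – our own helper, not part of SolidWorks, and the 42-inch sword length and pixel values are hypothetical:

```python
# Hypothetical helper for turning reference-image measurements into
# real-world sketch dimensions. The 42 in sword length and pixel values
# are made-up examples, not taken from the tutorial.
def scale_factor(target_length, measured_length):
    """Ratio mapping reference-image units to real units."""
    return target_length / measured_length

# Say the finished sword should be 42 inches tip to pommel and measures
# 1,050 pixels in the reference image:
k = scale_factor(42.0, 1050.0)       # 0.04 inches per pixel
print(round(90 * k, 2))              # a 90 px blade width becomes 3.6 in
```

The same factor then scales every other measurement you lift from the reference, keeping the whole sketch in proportion.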
Extrude the blade With the 2D sketch complete,
we can then use features to convert the sketches to
3D through extruding, cutting, revolving and so forth. Each
feature has its own menu to toggle different elements of the
action. Be especially mindful of the direction of extrusion.
We chose a Mid-Plane extrusion that centres the model on
the Sketch Plane because this will make any mirroring later
on far easier to achieve. This is a great example of the
importance of preplanning your workflow.
Carve a more defined shape Give the blade a
smooth, tapered edge using the Extrude Cut and
Chamfer features. We chose to not create a fine knife-edge
due to potential chipping after printing. For the cut, we
created a profile sketch using the top face of the sword as
the sketch plane, and created a Through-All Extrude Cut,
followed with a chamfer along the sword edge for added
dimension. We recommend doing all your fine detail work
like chamfers and fillets last because they can easily
overcomplicate your build early in your workflow.
Add signature details You can create sketches
on planes or atop (ideally) flat faces of your model.
On the blade’s largest face, begin sketching the elements
that will be extruded out or cut into to add character to the
sword, then mirror about the plane set up by the Mid-Plane
extrusion. Now would be a good time to review your history
on the left side of the screen and rename each step you have
taken to better keep track of your workflow. This will be
handy, especially if you want to scrub back to edit an earlier
part of the model.
Build the guard Save this file, open a new part
file and use the relationships and dimensions from
the original reference sketch to draw your handle’s guard.
Use a Center Rectangle sketch tool with the centre point
coincident with the origin point – this action directly affects
Step 7. Follow with an extrusion. The Cadian sword handle is
very simple compared to other designs but it is a great base
for almost any other handle you may make. For more
complex designs, identify the main primitive shapes that
create the form, build them, then add or carve out the detail.
Relationships and Mates
SolidWorks’ Relationships system could best be
described as the scaffolding of your model. Through
selecting at least two lines, arcs, or a combination of
sketch elements and establishing how they interact
together, you can define the sketch, which ‘locks’ it
into place. There are no ambiguities between the
elements. Mates work similarly to relationships in that
they both set parameters around how elements
interact. You can Mate most elements of parts and
planes together, but the key is to make as few Mates as
possible so as not to over-define the assembly. The same
goes for Relationships.
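The fully defined/over-defined distinction can be pictured as degrees-of-freedom bookkeeping: every free 2D sketch point carries two degrees of freedom, and each relation or dimension removes some. This toy Python model is our own simplification, not how the SolidWorks solver actually works internally:

```python
# Simplified degrees-of-freedom bookkeeping for a 2D sketch, illustrating
# why adding too many relations over-defines it. A toy model only.
def sketch_status(points, removed_dof):
    """Each free 2D point has 2 DOF; relations and dimensions remove DOF."""
    remaining = 2 * points - removed_dof
    if remaining > 0:
        return f"under-defined ({remaining} DOF left)"
    if remaining == 0:
        return "fully defined"
    return f"over-defined by {-remaining}"

# A lone line segment: 2 endpoints = 4 DOF.
print(sketch_status(2, 0))  # under-defined (4 DOF left)
# Anchor one end to the origin (-2), dimension length (-1) and angle (-1):
print(sketch_status(2, 4))  # fully defined
# Add a redundant horizontal relation on top (-1):
print(sketch_status(2, 5))  # over-defined by 1
```

Aiming for exactly zero remaining DOF is what 'locks' the sketch into place.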
Extrude grip, then pommel Sketch a circle to
the diameter outlined by the reference sketch on
the centre of the guard and extrude outward. For the
pommel, sketch on the bottom face of the grip, creating a
larger concentric circle to size, then extrude.
Construct the knuckle guard This is created
with the Sweep feature, which is made up of a
profile sketch and a path sketch. The first, the profile,
determines the shape of the knuckle guard. We used a
rectangle sketch tool to draw on the guard on the same
plane as the handle. Next, we will work on the path – this
sets the route the profile will be extruded along, and is
drawn on the plane that intersects the middle of the guard.
You could use an extrusion to create this but the Sweep
feature provides different customisations and, in some
cases, can create more organic-looking forms.
Add texture to the grip The Linear Pattern tool
can create this in the fewest moves. Linear
Patterns require a feature to pattern, which will be a short,
concentric cylinder extruded around the handle. Next, create
the path on which the feature will travel, which is made with
a centre line. Make sure to dimension the length of your
extrusion accordingly so when you pattern down the handle,
it ends at the pommel. You could also create the handle
using the Revolution feature but we found this method to
make the most sense for this build.
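Dimensioning the pattern so it ends at the pommel is a quick calculation: for n instances of a feature of width w along a grip of length L, the spacing is (L − w)/(n − 1). A short Python sketch with hypothetical sizes (the 4.5 in grip and 0.1 in rings are our own example numbers):

```python
# Hypothetical spacing for a linear pattern of grip rings: n instances of a
# ring of width w along a grip of length L, first ring at the guard and the
# last ring ending flush with the pommel.
def pattern_spacing(grip_length, ring_width, count):
    """Leading-edge-to-leading-edge spacing between pattern instances."""
    if count < 2:
        raise ValueError("need at least two instances")
    return (grip_length - ring_width) / (count - 1)

# e.g. a 4.5 in grip with ten 0.1 in rings:
s = pattern_spacing(4.5, 0.1, 10)
print(round(s, 3))            # 0.489 in between rings
print(round(9 * s + 0.1, 3))  # last ring ends at 4.5 in, flush with the pommel
```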
Create the assembly The parts are done! Save
your handle and open a new assembly file. Begin by
using the Insert Component tool to add both the handle and
blade to the scene. If your parts are oriented incorrectly,
don’t fret – we will use Mates to snap them into place. Also,
make sure the assembly document is using the same units
of measurement as your parts, or else they may export at
the incorrect size.
Use Mates We used three Coincident Mates in
this project – two of them use the sketch planes to
align and centre the blade in the middle of the handle, and
the other Mate butts the two parts together. First was a
Mate between the Front Plane of the handle part and the
Top Plane of the blade part. Next, a Mate between the top
face of the handle guard and the bottom-most face of the
blade is made, followed by the last Mate of the Top Plane
of the blade to the Front Plane of the handle.
Create joinery Next we will build the joinery to help
secure the pieces together after printing. SolidWorks’
assemblies are great because you can edit individual parts
within the assembly file without needing to open them
separately (you can open them separately too, though). The
parts not being edited will become translucent and you can
better see how your parts are interacting while you’re
editing. We used an extruded rectangle off the bottom of the
blade and a coinciding cut the same size and shape of the
handle. Be mindful of the tolerances of the 3D printer you
are using and edit your joinery accordingly.
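A common way to handle printer tolerance is to oversize the cut by a per-side clearance that you tune to your machine. The figures below are hypothetical starting points of our own, not universal values:

```python
# Hypothetical tolerance adjustment for a rectangular tab-and-slot joint.
# 0.2 mm per side is a common FDM starting point, but the right value
# depends entirely on your printer and material.
def slot_size(tab_w, tab_h, clearance_per_side=0.2):
    """Width and height of the mating cut, same units as the inputs (mm)."""
    return (tab_w + 2 * clearance_per_side, tab_h + 2 * clearance_per_side)

w, h = slot_size(10.0, 4.0)
print(round(w, 2), round(h, 2))  # 10.4 4.4
```

Printing a small test joint first (as tip 3 in the sidebar suggests) is the cheapest way to dial in that clearance.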
Export for 3D printing The majority of printers
take STL files. Go to File>Save As, and choose STL as
your file type. Before exporting, make sure to select the
Options button, which will open up a menu where you can
adjust the resolution of your STL file. SolidWorks likes to
export with the fewest triangles it can, which could lead
to undesirable faceting on rounded surfaces. We normally
select the Fine option under Resolution and experiment from
there. Congratulations – you’re now ready for printing!
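The faceting trade-off is easy to quantify: approximating a circle of radius r with n straight segments leaves a maximum chord-to-arc gap (the sagitta) of r(1 − cos(π/n)). This little Python check is our own illustration – SolidWorks' resolution settings expose deviation and angle tolerances rather than a raw segment count, and the 15 mm pommel radius is hypothetical:

```python
import math

# Chordal deviation of a tessellated circle: replacing a circle of radius r
# with n straight segments leaves a maximum chord-to-arc gap (sagitta) of
# r * (1 - cos(pi / n)). More triangles shrink the gap rapidly.
def sagitta(radius, segments):
    return radius * (1 - math.cos(math.pi / segments))

r = 15.0  # a hypothetical 15 mm pommel radius
for n in (12, 36, 120):
    print(n, round(sagitta(r, n), 4))  # 12 -> 0.5111, 36 -> 0.0571, 120 -> 0.0051
```

Half a millimetre of deviation is clearly visible on a print, which is why nudging the export towards Fine pays off on rounded surfaces.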
A note on printing props
Some tips for your projects:
1) Make sure to choose your printer and materials
wisely – really think about how you need the part to
function and then do your research on what materials
will best suit your parts.
2) If parts need to fit together, they should always be
printed in the same orientation, as the x, y and z
resolutions are never the same.
3) Test! Test small, significant sections of your parts
before printing the final versions.
4) Take the time to sand, prime and paint – while it is
time-consuming, it’s the best way for you to get the
results you want.
5) Seal your painted parts with a protective varnish.
All tutorial files can be downloaded from:
Bohdan Kryvetskyy
Incredible 3D artists take us
behind their artwork
INSPIRATION I spend hours on the road and I wanted to convey this and
the feelings that come with it in an illustration. This idea came to me
during one of my long journeys. I vividly remember the reluctant moment
of leaving the warm Lviv LAZ-695 bus on a cold winter morning. The
environment I created echoes distant places across the post-Soviet countries.
Bohdan became familiar with 3D
graphics as a teenager and works as
a professional artist in architecture
and product visualisation
Software 3ds Max, Corona
Renderer, Photoshop
06:00 am, 2018
Industry experts put the latest workstations,
software & 3D printers through their paces
Chillblast Fusion
Render OC Lite
This mid-range workstation uses the latest Intel desktop chips to
offer a boost for content creators
Compared with some of the beasts that
have arrived at the 3D Artist towers, the
£2,500 price tag of the Chillblast Fusion
Render OC Lite P4000 Professional 3D Editing
Workstation is relatively modest for a 3D
system, but it’s still enough to cover a hardware
configuration that will deliver blistering
performance. It has an Nvidia Quadro P4000
graphics card with 8GB of graphics memory,
32GB of DDR4 memory, a Corsair H100i
all-in-one water-cooling setup and an Intel Core
i7-8700K processor, part of the company’s most
recent Coffee Lake design, otherwise known as
its ‘eighth generation’.
For over a decade, Intel’s flagship mainstream
Core i7 processor has never offered more than
four cores, with the pricey Xeon, Extreme Edition
and Core i9 lines reserved for six or more cores.
But this year, thanks to reinvigorated
competition from AMD’s affordable eight-core
Ryzen processors, Intel has raised the core count
of its Core i7 chips, squeezing six of them into
the Core i7-8700K – a welcome move that is
certain to benefit content creators who use
multi-threaded software applications.
The Core i7-8700K launched at the tail end of
last year and is built on the same 14nm process
as its predecessor, the 7700K. As well as the
additional two cores, the 8700K has a maximum
Turbo Frequency of 4.7GHz, the highest in any
off-the-shelf processor Intel has made. This
means some serious graphics-processing grunt.
The fast clock speed provides extra headroom to
the graphics card and therefore better frame
rates in games engines such as Unity and Unreal
Engine, while the extra cores help with
multi-threaded software such as Photoshop,
Max, Premiere and After Effects. It’s a nifty
configuration for anyone producing 3D.
Chillblast has squeezed all this kit, along with
an Asus STRIX Z370-F gaming motherboard,
650W Corsair modular power supply, 3TB hard
disk and 250GB Samsung 960 EVO SSD into an
Aerocool Quartz case, notable for its use of a
tempered glass side panel. There is no internal
lighting kit, so unfortunately the use of heavy
glass is a wasted opportunity. Internally, there’s
plenty of space for airflow, with three additional
mounting bays for 3.5-inch hard disks, three
intake fans and plenty of slots for PCI cards.
As expected, in tests we recorded a linear 50
per cent performance improvement, offered by
an additional two cores, over the previous
generation’s quad-core processors. The
Cinebench CPU score was 1,387 and 3ds Max
HDTV Underwater rendering took eight minutes
20 seconds. Notably, in these multi-threaded
tests, AMD’s eight-core Ryzen processor wins.
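That 50 per cent figure is just core arithmetic: at similar clocks, going from four to six cores gives an ideal multi-threaded speed-up of 6/4 = 1.5. A quick sanity check (ignoring Amdahl's law and clock differences, so a rough upper bound rather than a guarantee):

```python
# Ideal multi-threaded scaling from extra cores, ignoring Amdahl's law and
# clock differences - a sanity check on the review's 'linear 50 per cent'.
def ideal_speedup(old_cores, new_cores):
    return new_cores / old_cores

print(ideal_speedup(4, 6))        # 1.5, i.e. 50 per cent faster
# The 8 min 20 s (500 s) render would take about 750 s on four cores:
print(500 * ideal_speedup(4, 6))  # 750.0
```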
But where this workstation shines is in 3D
accelerated tests that use the graphics card. The
4.7GHz clock speed is like attaching a rocket
booster to the P4000, giving it a real edge over
Ryzen. The Cinebench OpenGL test result was
207 points, while the SPECViewPerf benchmark
results weren’t far from the record-breaking
figures obtained from last issue’s Renda rig,
which costs almost twice as much, and the
Chillblast Fusion Render OC Lite even beat
systems from last year equipped with beefier
P5000 cards.
Put it all together and it starts to look like
great value and all-round performance. Notably,
opting for Intel means this system costs more
than the Chillblast Ryzen workstation we reviewed
some months ago – £2,500 versus £2,000.
That £500 difference may put this system out of
reach for some, but while Ryzen still doesn’t
disappoint, and will surely get better with the
launch of its 12nm second generation, right now
we’d stump up the difference and go with
Intel for a mainstream desktop workstation.
Orestis Bastounis
MAIN Our results show Intel offers better graphics-accelerating grunt, but AMD has the upper hand in
multi-threaded tests
BOTTOM LEFT The use of a case with tempered glass
seems unnecessary, adding weight and fragility
BOTTOM MIDDLE Specialist retailers such as Chillblast
go to great lengths to offer a tidy internal PC build
BOTTOM RIGHT Chillblast has worked hard to keep the
price reasonable, with a modest power supply capacity, a
slim amount of storage and an AIO water cooling kit
Essential info
Processor Intel Core i7-8700K
Memory 32GB DDR4
Storage 250GB Samsung 960 EVO SSD, 3TB hard drive
Graphics Nvidia Quadro P4000 8GB
Case Aerocool Quartz Pro Glass
Value for money
It lacks some of the bells and whistles of top-end rigs
but it’s a go-to choice for artists
LightWave 2018
NewTek returns with the first new version of LightWave that has
been released in three years
At midnight on New Year’s Day 2018, NewTek
launched its much-anticipated LightWave 2018
edition – much anticipated partly because of
delays to its launch, but also because it marks
a considerable change in the software.
The main changes in this new release of
LightWave were in the Layout application, which
is where models are staged, lit, surfaced and
rendered – so yes, the app is still split in two, to
the disappointment of many long-time users.
However, the renderer has been rebuilt from the
ground up and has many key changes.
Most importantly, the renderer is now a single
entity. It used to be that the preview VPR was
separate from the production renderer but VPR
is now the renderer, so viewport appearance and
final output will be identical. It is now a pure
brute force raytracing engine. Old scenes that
were fast on the hybrid scanline/raytracer can
now run a little slower until you optimise them,
but as a whole it is much faster.
Part of the reason for this redesign is that the
render engine is now physically based. All of the
lights and materials are based on modern
energy-conserving workflows. The lights are
measured in lux units (lumen per m²) and have
multiple importance sampling built in, which
results in much cleaner specular hotspots and
shadows and allows light texturing.
There is also now the new catch-all default
material, Principled BSDF. Designed around
industry standards, users coming in from other
software will immediately understand how to
use it and it can create almost any kind of
material using industry-standard texture inputs
for PBR texture sets. Having said that, there are a
lot of new terms to get used to and it’s
somewhat of a paradigm shift for those who are
accustomed to the old system. There’s also a
new physically based volumetric system
including light scattering, volumetric primitives
and OpenVDB support that can achieve
beautiful results quite quickly.
One of the best new features in this release,
though, is the buffers system, which allows you
to break out all of the shading elements of a
render into their component buffers so that you
can tweak each element in post. It’s also useful
to work out exactly where noise in your render is
coming from, letting you know where to increase your sampling.
The interface has received an overhaul, too.
Objects now have a modifier stack. Previously, it
was quite a chore to know what was deformed
and in what order. Now everything is in a single
stack that can be reordered, turned on and off,
and it’s possible to have multiple node editors on
a single object. The node editor is slowly being
worked throughout the whole application as
well. Almost everything has a node button,
allowing you to have incredible control.
Under the hood, Layout has a new geometry
core along with countless tweaks, user interface
changes and performance increases. One of the
most interesting things is the Python support
throughout, opening up the software extensively,
which some third-party developers have begun
to exploit in a big way.
While in many ways the interface looks the
same with similar settings, the far-reaching
changes behind the scenes have left some users
confused, especially because this version breaks
backward compatibility with the previous
versions (though, of course, export is possible).
Importing old assets and converting them to the
new materials is quite time-consuming. The
introduction of PBR and physically based lighting
also means that you have to deal with fireflies,
and the new noise reduction tool is very basic.
In conclusion, this is a big change in how the
software works with its excellent new features,
and it holds a lot of promise for the future.
However, how useful it will be to you depends on
if you value PBR rendering or need it in your
workflow. Overall, this is a substantial update
with a lot of very useful new features.
Andrew Comb
MAIN Complex metals, painted finishes and glass, all made
with only one material and textures
BOTTOM LEFT The power of the node editor, for creating
complex mixed effects
BOTTOM MIDDLE A complex metal shader made by
mixing maps and anisotropy, which wasn’t possible before in
the original LightWave release
BOTTOM RIGHT With normal maps, realistic lighting and
metallic finishes, OpenGL matches renders much better
BELOW Previewing surfacing and lighting in the viewport
with textured lights
OS Windows 7 64-bit and up / OS X Lion and up
Processor Intel Core 2 / AMD Athlon™ II
RAM 4GB minimum
Graphics NVIDIA GeForce 8400 series / ATI X1600 minimum
Screen Resolution 1024 x 768 pixels minimum
Value for money
NewTek has made a huge step in the right direction
with a beautiful new renderer
The inside guide to industry news,
VFX studios, expert opinions
and the 3D community
086 Community News
Vertex 2018
We’ve rounded up everything that
went on at our first-ever event!
088 Industry News
New animation studio launches, plus
REWIND’s Sol Rogers chairs a BAFTA
immersive entertainment group
090 Opinion
Ben Le Tourneau
& Scott Freeman
The Operators Creative cofounders
explain how they crafted quality
assets for a smartphone
092 Project Focus
VW – Born
MPC Advertising’s Fabian Frank
gives us lessons on getting the right
inspiration for a young ram
094 Industry Insider
Johannes Richter
The FX lead discusses some of the
techniques in making the Guardians of
the Galaxy Vol. 2 opening sequence
096 Social
Readers’ Gallery
The latest images created by the community
Missed the Creative Assembly masterclass at Vertex?
Then check out an exclusive tutorial for 3D Artist on page 50!
The panel discussed the
ethics of digital humans
Vertex launches with
a star-studded line-up
Our first event was held at Olympia, London, with incredible talks from industry veteran Scott
Ross, Chaos Group Labs director Chris Nichols, ILM creative director Ben Morris and more!
Debuting earlier in March 2018, Vertex brought
some of the world’s brightest minds in CG to
Olympia, London. Our first year saw a range of
features across the hall, from keynote talks to portfolio
review sessions and an expo area where you could get up
close and personal with Maxon, Oculus Rifts and Wacom
as well as studios like Creative Assembly and Blue Zoo!
Starting off the day with his thought-provoking
presentation on digital humans, Chaos Group Labs
director Chris Nichols discussed some of the technology
behind digihumans including on the Wikihumans project.
Next up was Star Wars: The Force Awakens and The Last
Jedi VFX supervisor Ben Morris, who revealed some of
the techniques and thought
processes behind The Last Jedi.
Motion graphics artist Simon
Holmedal presented on his
stunningly abstract work for Us
by Night festival’s title sequences,
while director Hugo Guerra
walked us through his journey of
game cinematics with Mario +
Rabbids Kingdom Battle, The Walking Dead: March To War
and Heroes Arena. We also welcomed Allegorithmic
founder and CEO Dr Sébastien Deguy who spoke about
the tools changing artists’ workflows, like Substance,
followed by Digital Domain cofounder Scott Ross who
debated the role of VFX in the UK after Brexit. Finally,
senior concept and environment artist Anna Hollinrake
rounded out the day with her presentation on low-poly
styles and working with limitations.
While our main auditorium was filled with world-class
keynotes, our workshops were also stacked with eager
attendees and featured the cream of technical talks from
the likes of VR genius Glen Southern, modelling
supervisor Adam Dewhirst on
the epic digital humans pipeline
at The Mill, Saddington Baynes
CEO Chris Christodolou and
senior digital artist Marc
Shephard on the mass
customisation of visual imagery,
Axis Studios’ Sergio Caires on
lighting and shader techniques,
Christopher Nichols presented
the first-ever keynote speech
and later hosted the panel
Vertex workshops were hosted
by artists from The Mill and
Creative Assembly
Bader Badruddin from Blue Zoo giving a masterclass on
cartoony CG animation, Mike Griggs and a fascinating
dive into 3D fundamentals, Creative Assembly’s Danny
Sweeney discussing character development for Total War:
Warhammer II and ending with Shayleen Hulbert breaking
down character art from concept to final render.
We ended the day with a panel that circled back to our
inaugural keynote on digihumans with Chris Nichols
hosting, and featuring freelance artist Maya Jermy, Axis
Studios CEO Richard Scott, Bournemouth University BA
(Hons) Visual Effects course leader Adam Redford, with
Scott Ross and Adam Dewhirst returning to the
roundtable. The group took on a more ethical debate
around the topic as well as discussing the good (and bad)
CG humans to have graced our screens in film.
Finally, we announced the winners of our first-ever Hall
of Fame, which is an annual award for outstanding
contribution to computer graphics. Our award this year
was 3D printed by Shapeways, the 3D community,
printing service and marketplace, and was awarded to Ed
Catmull, president of Pixar and Walt Disney Animation
Studios. As one of the most academically respected
researchers in the field, and for his contributions to the
development of RenderMan and techniques including
image compositing, motion blur, subdivision surfaces,
cloth simulation and rendering, texture mapping and the
z-buffer, we could think of no one better to receive Vertex’s
first Hall of Fame award.
We had a blast with our premier event and we hope to
see you at the next one in 2019!
Get in touch…
Portfolio review sessions
featured the likes of
RARE and Milk VFX
Sol Rogers heads up
BAFTA advisory group
The REWIND founder
will lead a group of
industry luminaries
from 15 companies
Locksmith is founded by
Sarah Smith, Julie
Lockhart and
Elisabeth Murdoch
Locksmith opens
new London
animation studio
The new facility is the first dedicated high-end
CG feature animation studio in the UK
Construction has been completed on Locksmith
Animation’s new studio, the first of its kind in the
UK. Based in London’s Primrose Hill, the studio
celebrated its opening on Monday 26 March.
Many of the industry’s leading lights were in
attendance to show their support.
Three floors high and a total of 5,000 square
feet, the new studio will house over 70 artists,
staff and crew. These occupants will comprise
the creative ‘front-end’ of Double Negative,
Blackmagic Design announces
DaVinci Resolve 15
Blackmagic has unveiled DaVinci Resolve
15, which comes with a new Fusion page
that fully integrates visual effects and
motion graphics. It is the first solution to
combine offline and online editing, colour
correction, audio post-production,
multiuser collaboration and now visual
effects in one software tool. A public beta
became available on 9 April 2018.
Locksmith’s London pipeline partners. This will
include a team of writers, directors, producers
and artists who will be engaged in creating
scripts, storyboards, design and editorial content
for the company.
“The studio gives a home to Locksmith’s
high-flying ambition to create a major new strand
of CG-animated movies here in London. It’s a
wonderful building – but it’s the artists it houses
that are our greatest assets. We hope it will add
to the appeal of London as a world-class
destination for the best animation talent working
today,” say Locksmith Animation’s co-CEOs
Sarah Smith and Julie Lockhart.
Production has already begun on Ron’s Gone
Wrong, a three-year undertaking, as part of a
multi-picture distribution deal with Twentieth
Century Fox. A slate of films is in the pipeline
with a new one due every 12 to 18 months.
The launch of this new studio comes at a time
when the UK is seeing an influx of international
productions across film and television. This is
largely thanks to attractive tax incentives and a
favourable exchange rate.
Sol Rogers, founder and CEO of
immersive content studio REWIND, has
been appointed chair of BAFTA’s new
Immersive Entertainment Advisory
Group, which will advise BAFTA on the
future of immersive entertainment.
The group will take over the work that
has already been done by last year’s
Virtual Reality Advisory Group and will
deliver its recommendations back to
BAFTA within the next 12 months. Each
of the group’s 15 members represents
the technological or creative sector in
some way and brings a high level of
expertise to the table.
Sol Rogers explains, “The focus will be
on educating and supporting BAFTA and
its members on all things immersive.
While the core team will be smaller than
before, there will be plenty of
opportunities for those interested and
involved currently in BAFTA film, games
and TV to participate in larger
consultations throughout the year."
Founder and CEO of REWIND, Sol Rogers
HAVE YOU HEARD? Gnomon’s Digital Arts Summer Camp returns to the US on 18-22 and 25-29 June 2018
Foundry announces Google Cloud partner for Athera
The partnership will deliver a new visual effects cloud offering
Foundry, one of the leaders in creative software
development, has announced that Google Cloud
Platform is the cloud service provider for Athera.
Known in beta as ‘Project Elara’, Athera is the
cloud-based technology that is expected to
transform visual effects workflows.
Craig Rodgerson, chief executive officer at
Foundry, adds, “The partnership with Google has
enabled us to build an industry-leading tool in cloud
technology. Google Cloud’s capabilities provide
Foundry with the infrastructure that we need to scale
up Athera and realise our goal of providing an
end-to-end cloud services solution for VFX studios
all around the world.”
Being hosted on Google Cloud Platform
means that Athera will be able to incorporate
Google’s unrivalled GPU, networking and
storage capabilities. Athera itself centralises
storage, creative tools and pipelines in one
location, giving users on-demand access to their
preferred tools.
Artists and studios can request a trial of Athera
now at
Clarisse integrates NVIDIA AI-driven denoiser
Unreal Engine 4.19 released
Epic Games releases significantly updated version of Unreal Engine
Isotropix brings the NVIDIA OptiX
AI denoiser to its flagship product
Isotropix has released Clarisse 3.6 SP1, a new
update that introduces the NVIDIA OptiX AI
denoiser. Sébastien Guichou, Isotropix’s CTO
and cofounder, says, “Integrating NVIDIA OptiX
AI-driven denoiser technology in Clarisse was a
no-brainer. It allows Clarisse’s renderer to
converge faster to the final result. This is simply
ideal for artists: they can now even make more
creative decisions faster!”
Artists can also enjoy some improvements to
interactive feedback as well as the addition of a
special command line tool that is designed to
solve the issue of denoising renders on
render farms that may not be entirely equipped with compatible GPUs.
Software shorts
Flowbox v1.6
Flowbox’s update features
F-Splines curves, allowing
maximum flexibility. There is also
the Hard Edge Checker to make motion blur
easier and a Pause Viewer for real-time playback
when using multiple viewers. Users can now
crop their 4K plate to save memory and keep the
best performance. Flowbox is $37 a month for a
node-locked licence or $43 for a floating licence.
Unreal Engine 4.19 enables users to step
inside the creative process, with tools
becoming almost transparent so that more
time can be spent on creating.
A new Live Link plugin allows artists and
creators to see what the finished product
will look like every step of the way. Further
improvements to Sequencer will give them
more control over scenes in real-time and
Animation tools now have pinnable
commands. Meanwhile, a new Dynamic
Resolution feature adjusts the resolution as
needed so users can achieve their desired
frame rates and make their worlds run
faster than ever before.
There have been many updates in the
areas of workflow and usability, as well as
128 changes submitted by the community
of Unreal Engine developers on GitHub. To
learn more, visit
Unreal Engine 4.19 is now available for download
Bringing you the lowdown on product updates and launches
Over 260 enhancements and fixes
come with this latest version.
SVOGI is now available on
consoles, enabling developers to cache SVOGI
on the disk and calculate GI offline. There are
also vast improvements to the terrain system.
Elsewhere, a new level file format future-proofs
collaborative editing and usage of version control
systems. CRYENGINE is free to use.
Photostory Premium VR
The latest entry into the
Photostory product family
introduces an extensive array of
VR functions. 360° photos and videos can be
assembled to form professional virtual tours,
additional content can also be added and edited
with just a few clicks. Users can also create real
surround sound with their audio files. Photostory
Premium VR is available for $129.99.
DID YOU KNOW? FStorm teases new geometry sampling feature ‘GeoPattern’ on their Facebook group
Cofounders and integrated
directors at The
Operators Creative
Best practices for achieving
broadcast-quality digital
assets on a smartphone
We’re in the midst of a mobile revolution where there’s more content competing
for our attention online than ever before. The Operators Creative tells us how we
can craft digital assets that stand out from the crowd
How much time do we spend on our mobile
phones? The answer is, unsurprisingly, rather a lot.
According to ComScore, we spend almost three
hours scrolling through some 178 metres of content every
day. That’s about 86 hours (or a five-kilometre fun run of
scrolling) every month.
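Those figures hold up to a quick back-of-envelope check. The 2.87 hours a day below is our own assumption for 'almost three hours', and a 30-day month is an approximation:

```python
# Back-of-envelope check of the quoted ComScore-derived figures.
daily_hours = 2.87    # assumed value for 'almost three hours' a day
daily_scroll_m = 178  # metres of content scrolled per day
print(round(daily_hours * 30))               # ~86 hours a month
print(round(daily_scroll_m * 30 / 1000, 1))  # ~5.3 km of scrolling a month
```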
Our daily feeds are bursting at the seams with
engaging videos, flashy banners, colourful animations
and more, all vying for our attention in an increasingly
crowded online environment. Indeed, even a single,
attention-grabbing Instagram post can make a larger
impact today than a $500,000 television
advert, so it’s no wonder brands continue
to flock to the digital world to speak
to their audiences. A diverse range
of CGI, cinemagraphs, stop
motion and simulation
techniques are being deployed
to make mobile content
stand out.
While this is all well and
good, it presents something
of a challenge for creatives
and artists. Crafting content
for mobile demands all of the
pop and vivacity of a glossy
television commercial but it’s
delivered on a screen no larger
than the palm of your hand – and
severely limited by the format of the
channel it is hosted on. Oh, and you’ll
probably be working with a tenth of that $500,000
budget… if you’re incredibly lucky.
It may seem like asking the impossible but that’s
actually not the case. When faced with a brief that
requests big-screen engagement from a small-screen
box, artists must step outside of their comfort zones to
find ways of making it work. The key is integrated
production, from pre to post.
A few years back, filmmaker Martin Villeneuve spoke
at a TED Conference about his sci-fi film Mars et Avril.
Ruminating on creative limitation, he said, “People have a
tendency to see the problem rather than the final result. If
you treat the problems as possibilities, life will start to
dance with you in the most amazing ways.”
This is the crux of the matter – work around your
limitations. No budget to close the streets for an on-set
shoot? Film with GoPros and make that the aesthetic.
Need to do a zoom or tracking shot but don’t have the
specialist tech? 3D camera mapping can achieve the
same look. The approach doesn’t matter – what does
matter is that the story is told well and that the content
remains of a high quality.
To put it another way, you need to be multitalented.
One discipline or approach just isn’t going to cut it.
Big-screen content via small-screen channels is a hugely
difficult brief but it can be achieved by leveraging
different skillsets, taking varied approaches and using
out-of-the-box thinking to tackle familiar challenges in
completely new ways.
Might a GPU renderer be the better solution for a
particular project? Do you have a bank of
textures that will take the image in a
different direction? Can all of these
elements be combined into
something that will catch the
eye? Even when it’s being
portrayed in a tiny square,
vying for attention against
your competitors?
This thinking underpins
everything we do and
achieve at The Operators
Creative. Indeed, we even
named a way of working
after it: Premium Social. This
is how we make beautiful
content within the context of a
restricted format. When provided
with a brief from a client, we consider it
from all angles right from stage one. We think
about how to shoot for maximum flexibility when it
comes to post-production.
In a recent campaign for Heineken, we shot the
product in front of a green screen and drafted several CGI
environments. The imagery could then be tailored to the
current location of a mobile phone, personalised per user.
But we only needed to do one shoot.
We think about all of the channels that content might
appear in, from Instagram to Twitter, YouTube and
beyond. You never know when a client will throw those
curveballs and want a Snapchat story at the eleventh
hour. You need to be prepared.
So, how do you achieve broadcast quality on a
smartphone? There’s no trick solution. The answer is to
use everything you have learned over the years – every
discipline, every approach, every experience – and
combine it all to create something that can be delivered
efficiently but will also compete visually in one of the
most competitive advertising landscapes there is today.
VW – Born
Fabian Frank talks creating
animals in advertising
Company MPC
Location UK
Project description MPC
worked on Volkswagen’s
‘Born Confident’ campaign,
with a 60-second hero spot
directed by Nick Gordon at
Somesuch and VFX by MPC.
The ad opens with the birth of
a young ram, delivered by a
shepherd and his son on a
dark and rainy evening. This
self-assured little chap
exudes confidence from day
one, boldly stepping into the
world to wreak playful havoc;
leading his herd of sheep,
standing up to a sheep dog,
the farmer and an
intimidating bull. But when
the ram encounters the new
Volkswagen T-Roc, he
realises that he has finally
met his match.
Biography MPC is a
multi-award-winning creative
studio. As world leaders in
visual storytelling for over
two decades, with VFX
studios in nine cities globally,
MPC is renowned for adding
visual wonder and creative
expertise to the advertising,
music, contemporary art and
film industries, crafting work
across the full spectrum of
comms platforms and
immersive technologies. The
studio’s unique combination
of talent, technology and craft
delivers experiences that are
more distinctive and
memorable. Recent projects
include Channel 4’s ‘We’re
the Superhumans’, Samsung
‘Ostrich’ and EDEKA ‘2117’.
• Fabian Frank
VFX supervisor
With such a bold and assured young ram in its
stride at the forefront of VW’s Born Confident
campaign, you’d be surprised to learn that the
references did not just lie in the animal kingdom. As Fabian
Frank, MPC VFX supervisor, explains, the research stage
used some similar animals but they also had to go further
afield. “A bull was a good example – mountain goats were
good, too. And sheep aren’t related necessarily but
anatomically a young ram and lamb are closer. They are
proud characters, as well. Horses were a good one for the
proud walk but ultimately we looked at strong personalities
like Muhammad Ali and Usain Bolt, and we studied them,
their posture and their behaviour.
“Some people, when they enter a room you can see who
they are: strong personalities, that’s just the way – you
haven’t even talked to them yet but it’s all through posture
and animation.
“You can’t copy things 1:1 but you can learn what makes
a walk confident, how somebody talks and what makes
that confident. When Muhammad Ali does an interview or
talks about his fights, he’s confident and very dominant but
he’s charming as well and likeable. So that was a good
reference for our ram.”
The most challenging part of working on the advert,
Frank explains, was when working on the character
development. “You only have 60 seconds to establish the
character, tell a story and wrap it all up. You have to be
really strong with what you want to tell with your message
and to get that all into one character, what the client
wanted, what the agency briefed, what they were after. I
think that was the trickiest part. It took a while to crack the
code – [to figure out] what kind of animation and posing
supported the character.”
The young ram joins a herd of incredible CG creatures
created by MPC Advertising in the last handful of years,
including a moonwalking pony for 3 and the award-winning
‘Buster the Boxer’ for John Lewis and Samsung ‘Ostrich’
campaigns. To bolster the offering of their stunning
creature work, MPC set up the Life group in 2017 as a
response to the growing client demand for high-quality
creature projects.
Of the tools used to create Born Confident, Frank says
that this was mostly handled in Maya and Houdini.
Grooming was created using MPC’s in-house proprietary
tool Furtility and texturing was a combination of ZBrush,
Mudbox, Mari and a little Substance. “We have a pipeline
to exchange between Maya and Houdini, so we can
exchange between packages, which makes it good for
simulating fur in Houdini and adding dirt, effects like drops,
and bringing that back into the Maya pipeline seamlessly.”
Fabian Frank was speaking at The VFX Festival run
by Escape Studios. The company teaches students the art
of filmmaking and specifically VFX/animation. Find out
more about next year’s festival at
Ultimately we looked at strong
personalities like Muhammad Ali and
Usain Bolt, and we studied them,
their posture and their behaviour
MPC VFX supervisor Fabian Frank
explains how the studio ups their
game with every campaign
“The R&D depends on the task. With every project,
we review our technology and we are constantly in
contact with our software team. So we’re not saying
that the technology we had a year ago, that’s as good
as it gets. We can always take it further. If you look at
John Lewis’ ‘Buster the Boxer’ advert and the
Samsung ‘Ostrich’ that came right after, you can see
that we always try to take a project further.”
01 The VW Born Confident
campaign was a
collaboration between
MPC Advertising,
adam&eveDDB in
London and DDB
02 The Life group ensured
a high standard of
creature work, using
modelling, grooming
and animation
03 The advert was turned
around in four months,
and included previs, the
shoot itself and delivery
04 The team initially
consisted of a modeller,
groomer, rigger and
animator during the
previs stage but
ramped up to 15-20
after the shoot
05 MPC used its
proprietary fur tool
Furtility to achieve
stunning photorealistic
fur on the young ram
Job title FX lead
Location London, UK
Biography Johannes Richter
has had a varied and
illustrious career in the VFX
industry, with over ten years’
experience in FX. As FX lead
at Framestore, his recent
credits boast an array of VFX
heavy Hollywood
productions, including
Guardians of the Galaxy Vol. 2,
King Arthur: Legend of the
Sword, Avengers: Age of Ultron,
Poltergeist, Everest, Dracula
Untold and Robocop.
Portfolio highlights
• Guardians of the Galaxy
Vol. 2, 2017
• King Arthur: Legend of
the Sword, 2017
• Avengers: Age of Ultron, 2015
• The Martian, 2015
• Poltergeist, 2015
• Everest, 2015
• Dracula Untold, 2014
• Robocop, 2014
• Gravity, 2013
An insider’s look at Framestore’s
FX challenges on Guardians of
the Galaxy Vol. 2
The opening titles sequence to Guardians of the Galaxy
Vol. 2, James Gunn’s follow-up to his first entry in the
Marvel Cinematic Universe, has already become
somewhat of a legendary scene. It features Baby Groot
dancing to Electric Light Orchestra’s ‘Mr Blue Sky’ while
the rest of the Guardians take on a deadly pink alien,
the Abilisk. An insane amount of action – including laser blasts,
explosions and rainbow-coloured particles – plays out
behind the clueless but incredibly cute Baby Groot.
Framestore FX supervisor Johannes Richter helped
oversee the effects simulations seen in the sequence. As
fun as it is, it is a crucial scene in establishing how the
characters are now working, or not working, as a team.
“It needed to introduce the characters and how they still
relate to each other,” says Richter. “So there’s bickering
going on and there’s Drax being a bit slow, and then you
have Baby Groot as the central figure. It ties it all together
almost like a music video because he’s dancing, entirely
oblivious to what’s happening around him, and you have
Rocket kind of taking care of him by making sure that
things don’t fall on him.”
Gunn prepared Framestore for the challenge by filming
himself dancing in a very distinctive style, with animators
translating this to their digital Baby Groot. The animation
had to work within what was essentially one continuous
camera take lasting a number of minutes, while simulated
pieces of shrapnel, explosions and even rainbow-coloured
‘matter wave’ vomit emanated from the Abilisk.
“It’s all supported by the effects of the shrapnel flying
about and the explosions. It was quite a challenge due to
the number of characters involved and things that had to
happen at the same time,” notes Richter.
Framestore worked on several other parts of Vol. 2, too,
such as the rather psychedelic ‘Space Chase’ sequence.
Here, Quill and Rocket dash through space among ribbons
of plasma and through a quantum asteroid field where
debris is able to perpetually shift in and out of existence.
Richter says one of the major challenges was simply
figuring out what a quantum asteroid is.
“The clients obviously want something fresh and new,
but it also needs to be something comprehensible for the
audience and not too convoluted. We didn’t want it to be,
like, ‘Well, it’s colourful and it goes boom everywhere, but I
don’t know where I am anymore, where the heroes are,
and what is actually happening?’
“We were trying to facilitate with the effects how the
narrative had to unfold – we’ve done a lot of effects work
but quite often effects are incidental, or are happening
because something else is happening,” explains Richter,
who suggests his aim as FX supervisor is to make the
effects as important and memorable as the principal
animation of the characters. “We want to deliver images
where it all fits together.”
How Framestore made the confetti-coloured alien ‘matter waves’ for
Guardians of the Galaxy Vol. 2
The Guardians of the Galaxy Vol. 2 opener sees the
Abilisk release a good quantity of rainbow-like alien
projectile vomit, otherwise known as ‘matter waves’,
as it battles our heroes.
“It was supposed to be something that’s very
pretty and very dangerous at the same time,” says
Richter. “So the official brief was, ‘Make it like My
Little Pony but terrifying.’”
“What it essentially consists of is a big rainbow
cloud, which is a filament solve from Houdini that
has been volume-ised and we added some colours to
it, and then some spew-y confetti-esque stuff that
flies about and when that touches the floor, it
explodes,” he explains.
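The confetti element Richter describes boils down to a simple rule: debris falls under gravity, and any piece that touches the floor is removed and triggers a burst at the contact point. The sketch below is a toy illustration of that rule only; the names and numbers are assumptions, not Framestore's or Houdini's actual setup.

```python
# Toy particle step in the spirit of the 'matter wave' confetti:
# debris falls under gravity, and a particle that reaches the floor
# is removed and spawns a burst event. Purely illustrative.

GRAVITY = -9.8
FLOOR_Y = 0.0

def step_confetti(particles, dt):
    """Advance particles one step; return (survivors, burst_positions)."""
    survivors, bursts = [], []
    for (x, y, z), (vx, vy, vz) in particles:
        vy += GRAVITY * dt                       # gravity acts on velocity
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        if y <= FLOOR_Y:
            bursts.append((x, FLOOR_Y, z))       # trigger an explosion here
        else:
            survivors.append(((x, y, z), (vx, vy, vz)))
    return survivors, bursts

particles = [((0.0, 0.01, 0.0), (1.0, 0.0, 0.0)),   # about to hit the floor
             ((0.0, 5.0, 0.0), (0.0, 0.0, 0.0))]    # still falling
particles, bursts = step_confetti(particles, 1.0 / 24)
```

In a production solver the burst positions would seed secondary simulations (the explosions), while the volume-ised filament solve Richter mentions supplies the rainbow cloud behind them.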
The clients obviously
want something fresh and
new, but it also needs to be
something comprehensible
for the audience and not
too convoluted
Johannes Richter, FX lead
All images copyright © 2017 Marvel Studios
01 Guardians of the Galaxy Vol. 2
opens on a shot of The
Sovereign, which is about to
be attacked by the
many-tentacled Abilisk
02 The opening sequence
needed to reiterate how the
heroes had been working
together since the events of
the first film
03 The studio won a VES Award
in 2018 for Outstanding
Virtual Cinematography in a
Photoreal Project for the
opening sequence
04 The entire set is reflective
and the camera doesn’t cut
once, meaning the team was
faced with working on
800-frame-long effects
05 Rocket was given a makeover
with Framestore’s proprietary
tool fcHairFilters, which
created a photorealistic finish
06 Framestore worked on 620
shots ranging from creature
work and spaceships to the
opening sequence and a
space chase through galaxies
Images of the month
Here are some of
our favourite
3D projects
submitted on
in the last month
01 Red Thunder
by Daniele Boldi Cotti
3DA username
Daniele Boldi Cotti says:
“In honour of Alfa Romeo Tipo 33,
which celebrated its anniversary
last year, this is a tribute to the best
Italian car manufacturer and its
sublime style.”
We say: This is a great example of a
shot of a car in motion. It’s got a
stunning vintage-style photography mood,
and we can easily see
ourselves in the driver’s seat!
02 Duckey Duck
by Teodoru Badiu
3DA username
Teodoru Badiu says:
“This is a 3D illustration of a
character design that I created for a
designer toy idea. Duckey Duck is
based on my drawings and the final
result was put together using Modo
and Photoshop.”
We say: A brilliant concept that has
been realised to its full vibrant
potential, we love this piece by
Teodoru. We adore the mish-mash
of references and the shader work.
03 Berserker
by Eugene Gittsigrat
3DA username
Eugene Gittsigrat says:
“There was a great desire to do
something big and strong. I love
working on real animals.”
We say: We love some of the
detailing here, from the tiny water
droplets on the foliage below to the
fresh red blood around the bear’s
neck. This is ace.
04 Bodies
In Motion
Male Strap
by Vincent Chai
3DA username
Vincent Chai says:
“This is an anatomy study that I
did from Scott Eaton’s Bodies In
Motion. I started from a ZSphere,
then slowly built it up with form and
details. I rendered in ZBrush and
composited with After Effects.”
We say: Vincent’s study of this
gymnast perfectly showcases what
he’s learned with Scott Eaton’s
resources. The proportions are
perfectly illustrated and the
minimal treatment really shows off
the model. Great job!
Create your own gallery at
behind their artwork
CHALLENGE YOURSELF Chappie, from the film of the same name, is the
coolest CGI character that I’ve ever seen on the big screen and one day I
decided to challenge myself to model a bust of him. The main goals were
to make it very much like the original character and to keep the topology
as clear as possible at the same time.
Incredible 3D artists take us
Chappie Bust Fan Art, 2018
Software Maya, ZBrush,
Substance Painter, Redshift
John is a student at CG Spectrum
College of Art & Animation in
Australia. His main focus area is hard
surface modelling for cinematics
John Olofinskiy