Finally, after three days of hard and noisy work, the 17th edition of Cartoomics and Ludica, the Italian comics, games and videogame fairs, has come to a close. Obviously PlaySys could not stay on the sidelines, and back in November 2009 we secured stand C20, where we exhibited some updates on our work. On the first day we felt a bit out of place, surrounded by sellers of comics, manga, illustrations, prints and other themed material; then on Saturday and Sunday we were bombarded with technical questions and various requests relevant to our research and development field.
Beatrice and Luca
Once again it was suggested that we move PlaySys' headquarters from Milan to Toronto, and the matter is starting to become more serious than expected: "the Italian market is dead" was stated by more than one person, and indeed the economic, bureaucratic and political landscape we are living through in our Bel Paese is not the best.
We got in touch with commercial and production companies interested in getting closer to our technologies or to our research results, and this was our main target.
We met professionals who came into contact with our company… and this will surely bring interesting mutual benefits.
Mona and Luca
As for young talents, many people came over, attracted by our 3D works, and discovered what we do. Many were surprised to find one of the very few Italian software houses doing research in the videogame field. Of course we did not talk only about videogames and 3D: we also showed our publishing products and our participations and collaborations in the music field, and more. Still on the subject of young talents, I can announce the full-speed return of D3vStudio, which will deal precisely with talent recruitment in the videogame field.
The topic that by far had the most success, besides videogame development, was the news about Play-Institute, the training institute we will launch by the end of 2010. It will be an institute for computer graphics, pre-production, programming and sound design, entirely oriented towards the videogame market. The difference from other institutes is essentially this: we will offer knowledge and teach students how to actually enter the job market. Play-Institute will be a school meant to provide education, not to generate profit for its owners.
Martinelli and Crimì
All in all, the impression of the event was more than positive, from the organization to the visitors' turnout. Some professionals, my students at NABA Milano and friends stopped by our stand, generating a continuous debate on new technologies and their use in all the fields covered by Cartoomics and Ludica.
Thanks to everyone who attended the event and came to visit us!!!
Molly *artwork stolen during the teardown 🙁
Finally!
These are the shots we took at the photographic studio, with the help of Sara and Pamela. I already wrote about the shooting session in another post, so here are the pictures.
These images represent some publishing works we did at PlaySys*: Sprea's Seven Magazine, Sprea's 3D World and my book for Hoepli, 3ds Max Design e Architettura.
*please don’t give importance to PlaySys webpage, it’s a mess and we are
working on it.
Here we are!
After two weeks of waiting, here are the photos taken during the shooting session at the photographic studio, with the help of Sara and Pamela. I already talked about this session in a previous post.
Let me introduce a plugin I wrote for 3ds Max during our last huge-size render job. It is called P.S.R. (PlaySys Split Render) and is a great tool for rendering images of amazing size. The idea behind it is really simple and partially used by some renderers like mental ray, finalRender or V-Ray: it splits the entire render into sub-pieces and then processes them as single images (subdivided again by the buckets of the renderer in use).
We had to render some images at a size of 7500×5000, for a total of 37,500,000 pixels (37.5 megapixels), and this saved the entire production… the next script I write will have to convert pixels into €! 🙂
HOW IT WORKS
The script's user interface automatically detects the renderer and the output size you have to process. The first parameter to set is the fraction of the image; in the picture it is 3, which means 3 on X and 3 on Y, so 9 sub-images.
Based on the render size and the fractions, the script automatically calculates the size of each sub-image: 213×160.
By default I use a tolerance of 25 percent of the image. This tolerance is a small overlap of the sub-images, and it is really important in order to obtain a seamless connection between sub-images.
Depending on the fraction number, you'll also have a set of buttons called Matrix#. These buttons show you a preview of the rendered sub-image and, if the Auto flag is checked, pressing a Matrix# button fires up the renderer.
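For the curious, here is a minimal MAXScript sketch of the splitting idea. It is not the actual P.S.R. code: the variable names and tile file names are mine, and the exact behaviour of the region: parameter of render() (full frame with only the region rendered vs. a cropped tile) can differ between renderers and 3ds Max versions, so treat it only as an illustration of the overlap logic.

    -- Minimal sketch of the split-and-overlap logic (not the real P.S.R. source).
    -- Uses the current renderer and the renderWidth/renderHeight already set in the scene.
    fraction = 3                         -- 3x3 = 9 sub-images
    tolerance = 0.25                     -- 25% overlap between neighbouring tiles
    subW = renderWidth / fraction
    subH = renderHeight / fraction
    padX = (subW * tolerance) as integer
    padY = (subH * tolerance) as integer
    for y = 0 to fraction - 1 do
        for x = 0 to fraction - 1 do
        (
            -- region is #(left, top, right, bottom) in pixels, clamped to the frame
            x1 = amax 0 (x * subW - padX)
            y1 = amax 0 (y * subH - padY)
            x2 = amin renderWidth ((x + 1) * subW + padX)
            y2 = amin renderHeight ((y + 1) * subH + padY)
            outName = maxFilePath + "tile_" + (x as string) + "_" + (y as string) + ".png"
            -- verify how your renderer handles region output before using it in production
            render region:#(x1, y1, x2, y2) outputFile:outName vfb:false
        )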
Like many of my scripts, this one may look a little useless, but if you have to manage an entire workflow with human and electronic resources, it can save time and money… especially because it avoids crashes on big scenes.
…oh, and it integrates with Maxwell Render 😉
I still have a list of improvements to make:
First, in the image area you see in the middle, you will have the entire image; I will render into a VFB and keep the full-quality image in that area. You will be able to reopen it, launch a new sub-image, or save the image as a layered Photoshop PSD file;
I want to add a “Batch” command that executes all the renders in sequence;
I have to design a new user interface that lets the team use any N fraction, not only 1, 2^2 or 3^2 sub-images (yeah, I have to remove the 1^2 option ;))
I have to add usage instructions, and maybe I will append a rollout with a lite version of DMRB, my previous semi-realtime renderer;
We will use this tool internally at PlaySys, but I don't exclude the idea of sharing it in the future. Before that, I will talk about it and about other pipeline and workflow improvement strategies in my next book.
Yes, it’s official!
I'm writing a new book that explains in depth all the steps behind the making of a videogame. Being a technical figure, I will concentrate on technical aspects like pipeline and workflow management, plugin and software production, the impact of research and development, commercial 3D packages vs. custom ones, game engines and other interesting subjects.
For those who don't know, I recently wrote a book about 3ds Max Design and architecture, in which I explain, across 400 pages, all the tricks behind offline rendering. HERE is the publisher's link and HERE is a post about it.
We finished a new Autodesk 3ds Max script that allows us to batch-create textures, assign materials, hide objects, set properties, render scenes, add channels, and save to RPF files (plus .png).
It could appear useless if you only have to render 3 or 4 times a day, but for some projects we need to create a lot of renders (about 58,000 in the current one) and it would become a mess without this script.
So, I’m glad to introduce our new friend: PLAYSYS | B.M.R.
The workflow is really simple:
– prepare your 3D models in the scene
– prepare the textures you'll need to apply
– select the render settings
– press the “Create” button
The utility will do the rest, mixing 3ds Max's capabilities with BMR's functions. After a while you will find all your renders ready to be used. The utility names the images as: $filename_$object_$texture_#progressivenumber.extension
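Just to give an idea of what such a loop looks like in MAXScript, here is a hypothetical sketch of a BMR-style batch. It is not the real B.M.R. code: the texture folder, the simple Standard material setup and the plain .png output (instead of RPF with extra channels) are all assumptions made for the example.

    -- Hypothetical sketch of a BMR-style batch loop (not the real B.M.R. source).
    textureFiles = getFiles (maxFilePath + "textures\\*.jpg")  -- textures prepared beforehand
    sceneName = getFilenameFile maxFileName
    counter = 1
    for obj in geometry do
        for texFile in textureFiles do
        (
            -- build a basic material with the current texture and assign it to the object
            mat = standardMaterial name:"BMR_temp"
            mat.diffuseMap = bitmapTexture filename:texFile
            obj.material = mat
            -- $filename_$object_$texture_#progressivenumber.extension
            outName = sceneName + "_" + obj.name + "_" + (getFilenameFile texFile) + "_" + (counter as string) + ".png"
            render outputFile:(maxFilePath + outName) vfb:false
            counter += 1
        )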
As you can see, the shader is quite complex, so start exploring this simple one and take time to understand how the parameters work.
Here is a schematic description of the maps I used:
The next one is a new scheme that can give better results. At the moment I'm still working on this structure. I have to prepare new textures for the Subdermal and Epidermal channels, but I will do them for a new model I'm working on.
Enjoy
Here is the material I used. You'll have to substitute the two required textures with your own; btw, these are only for displacement and bump. Feel free to use it and share it.
DACS is the acronym for "D3v Anti Crash Saver" and is a little utility I wrote on 23rd December 2005 (I always keep a log inside my project folders ;)) that prevents the common scene damage caused by a crash of Autodesk 3ds Max.
Anyone who uses this 3D package knows for sure how frustrating it is to redo work on a scene because a crash destroyed it. In fact, when 3ds Max crashes, the scene can get damaged and the file may end up containing only a small chunk of the entire work; this happens about 1 time in 10 crashes.
After this, when you try to reopen your file, 3ds Max says it is corrupted and the data is completely lost.
There are 3 solutions to this problem:
– Back up your folders and files frequently
– Increment your scene version with a progressive number
– Use my DACS 😉
The utility consists of a simple 3 KB script and is loaded in the usual way from the MAXScript menu. You can use the startup scripts folder too. In the previous image you can see how it appears when loaded.
As you can see, it automatically finds the file name and the path of your project. This script was written before Autodesk introduced the new project folder structure, similar to the one adopted in older versions of Maya. This means that DACS saves the files in the same folder as the project (fixing this is still on my TODO list).
At this point you can start working safely: my script will inform you about the time elapsed since the latest save, and when you press the "Save Files" button it will save your file plus two additional, completely identical copies.
If your system crashes, the power goes out or the operating system gets stuck, DACS will maintain the integrity of the scenes, or at least of one of them 😉
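The core idea fits in a few lines of MAXScript. This is a simplified illustration of it, not the actual 3 KB script, and the backup file-name suffixes are my own invention.

    -- Simplified illustration of the DACS idea (not the actual script).
    -- Saves two spare copies first, then the regular file, so the scene keeps its name.
    fn dacsSave =
    (
        if maxFileName == "" then messageBox "Save your scene once manually first."
        else
        (
            base = maxFilePath + (getFilenameFile maxFileName)
            saveMaxFile (base + "_bak1.max")   -- first identical copy
            saveMaxFile (base + "_bak2.max")   -- second identical copy
            saveMaxFile (base + ".max")        -- the regular scene file
        )
    )
    dacsSave()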
Many thanks to Giorgia Foresta, who spent time correcting my terrible English!
DMRB means D3v Multi Render Basket and is a little script I wrote about 3 years ago.
The idea behind it comes from LightWave's FPrime, a quick semi-realtime renderer that is fully integrated into the 3D package and lets you render almost anything doable inside it.
DMRB uses the default renderer to compute the image, so if you are working with mental ray you can benefit from all its features such as Sub Surface Scattering, Final Gather, Global Illumination, Caustics, and so on.
Of course, if you use the other renderers integrated in 3ds Max (Scanline, V-Ray, finalRender, Brazil) it still works fine!
In the video below you can see it with a very simple scene, which lets me show its usefulness.
Note that it is very different from ActiveShade, because DMRB shows the entire result, without optimizations (btw, some can be enabled on the panel) and, again, it works with all the major renderers.
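In spirit, the trick is simply to call the assigned production renderer over and over into a small bitmap window. The stripped-down MAXScript sketch below is my own illustration, not the DMRB source; the refresh interval and the half-size preview are arbitrary choices.

    -- Stripped-down illustration of the DMRB idea: periodically re-render the scene
    -- with whatever production renderer is assigned, into the same preview bitmap.
    previewBmp = bitmap (renderWidth / 2) (renderHeight / 2)
    rollout miniDMRB "mini DMRB (sketch)"
    (
        timer clock "refresh" interval:2000 active:true   -- re-render every 2 seconds
        on clock tick do
        (
            render to:previewBmp outputSize:[previewBmp.width, previewBmp.height] vfb:false
            display previewBmp                            -- show/refresh the bitmap window
        )
    )
    createDialog miniDMRB 200 50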
Today I found this interview, which I gave at the beginning of October at the Italian Videogame Developer Conference, an event I attended as an Italian development studio and as a sponsor.
Rereading what I said about the PlaySys startup brought back several memories (so much for those who claim I have no heart), so I decided to publish some pictures of PlaySys 1.0 🙂
Today I reread the linked interview I gave at the end of 2009 during the Italian Videogame Developer Conference.
I spoke about PlaySys and the videogame development research we did, which was a really important fact for the Italian game industry. The previous post talked about an Italian problem with business management, which is one of the causes behind the underdevelopment of videogames here.
Btw, I decided to post some images of PlaySys 1.0, captured in the first weeks of the studio.
Alessia and Maurizio
Gianni
Luca
Thumb between ALT and SPACE, middle finger on SHIFT and ring finger on CTRL… the only exception was Maurizio, who kept all the fingers of both hands on Alessia 24/7
After some weeks of work on 2D graphics, I finally got back to 3D tasks.
I decided to spend some time playing with Arak and the Sub Surface Scattering shaders offered by mental ray inside 3ds Max 2010. I tweaked the parameters for an hour and I have to say they are really quick to learn and use.
The model I used is simple and light enough for rendering tests.
The first problem is that the 3ds Max guide has no section covering these parameters: whether you open the official 3ds Max guide or the deeper mental ray one, you will not find much information beyond the theory of SSS and how it is simulated/computed.
I'll try to explain a few things here; maybe someone else will find these tips useful.
The first thing to do is to understand a very important fact: the importance of world units and scale.
Sub Surface Scattering renders the surfaces considering the quantity of light that is absorbed and scattered through the model.
If you have a thin wall, let's say 2 millimeters thick, a strong light setup and a scatter propagation of 1 meter, you will certainly not obtain what you expect.
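Before touching any radius or depth value, it is worth checking the scene's system unit from the MAXScript Listener; a tiny example (units.SystemType is the standard property for this):

    -- Check (and, if needed, set) the system unit, so that SSS radii and depths
    -- correspond to real-world sizes; mismatched units are the usual SSS pitfall.
    format "Current system unit: %\n" units.SystemType
    -- units.SystemType = #Centimeters   -- change it only on a new/empty scene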
Now let's apply a Sub Surface Scattering shader (you must enable mental ray if you changed the renderer for production purposes). The one I selected is the simplest SSS shader offered by mental ray, and we must understand how it works before proceeding.
When working with SSS shaders you have to think about the color of the unlit surfaces, the color of the lit areas and the specular, exactly as with a traditional shader, PLUS a color for the parts of the mesh penetrated by light. Now let's see what the shader offers us in the Diffuse Sub Surface Scattering rollout.
The first important parameter is the numeric value called "Unscattered Diffuse Weight". This parameter determines the mix between the traditionally shaded areas and the scattered ones. Imagine it as a sort of blending between two different materials, one simple and one with light penetration capabilities.
To better understand this value, try setting it to 1 and rendering, then set it to 0 and render again.
Depending on your light setup, you will get a render similar to this: the first, with value = 1, shows a strong contribution of the Unscattered Diffuse Color, while the second, with value = 0, has a null Unscattered Diffuse Color and a full Scattered Color.
Look at the Unscattered Diffuse Color in the parameter just above; that's the color of the first render.
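If you prefer to run this 1-vs-0 comparison from the MAXScript Listener, something like the sketch below works; I am not certain of the exact property name the SSS material exposes to MAXScript, so the snippet first lists the real names with showProperties, and the name used afterwards is only a guess.

    -- Assumes the SSS material sits in the first Material Editor slot.
    sss = meditMaterials[1]
    showProperties sss     -- prints the real property names in the Listener
    -- NOTE: the property name below is an assumption; replace it with the
    -- corresponding entry from the showProperties output.
    sss.unscattered_diffuse_weight = 1.0
    render vfb:true
    sss.unscattered_diffuse_weight = 0.0
    render vfb:true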
Now that we know what the Unscattered Color is and how to change it, we need to understand what the Scattered Color is.
In this image I set the Unscattered Diffuse Weight to 0 and let the shader compute the Scattered Color, based on the two available colors (Front and Back) plus their weights. Look at the image descriptions, colors and values and you'll get it; it's really simple how it works!
Again, take a look at my values. I changed the Front and Back colors' weights a little and changed the radius and depth too. Consider that my model has a radius of 10 centimeters. At the beginning of this article I wrote about the importance of the scene scale; now you know why.
Note that you can diminish the Unscattered Weight by changing the color instead of the value. For example, if you set the color to pure black (0,0,0) with a weight of 100, the result is the same as any color with a weight of 0.
Generally I use this trick on the diffuse color of standard materials, and here in the SSS the standard diffuse is the Unscattered Diffuse Color. As you can see, 3ds Max lets you use a numeric value or a flat color by default, but you can replace it with a procedural or textured one.
In this case I used the Falloff shader on the Diffuse Color to obtain a Maxwell-style material. Generally I work mostly with Next Limit's renderer and I really like how it treats shaders.
I simply used a dark brown unscattered color for camera rays perpendicular to the surface and a medium grey/blue for the parallel ones. I changed the transition curve, adding a Bezier key, and here we are:
Not bad: the mesh now has a strong rim-light effect that detaches it from the background. Remember that using low values is always preferable, because you'll obtain a perceptible effect while avoiding making it look too synthetic.
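If you want to rebuild a similar falloff map from the Listener, here is a rough sketch. The Falloff map defaults to the Perpendicular/Parallel mode used here, the two colors are only approximations of mine, and the Bezier key on the Mix Curve still has to be added by hand in the Material Editor.

    -- Rough re-creation of the falloff trick; assign the result to the
    -- Unscattered Diffuse Color map slot of your SSS material.
    fo = falloff name:"SSS_diffuse_falloff"
    fo.color1 = color 60 40 30     -- dark brown for rays perpendicular to the camera
    fo.color2 = color 120 130 150  -- medium grey/blue for parallel (grazing) rays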
Now our teapot looks really interesting compared to the beginning, but you can notice that it has no specular component. This is due to my earlier tweak of the Specular Color inside the Specular Reflection rollout: I changed it to pure black, which means no specular highlights and no reflections. Now it's time to change the color and the Shininess value.
In the render above you can see the specular color and its roughness.
We are ready for our final render, but first we need to change the quality values at the top of the shader. Set the lightmap value to 100% of the render size (the light map of the Sub Surface Scattering effect) and adjust the computation samples.
Lower values = low resolution but quick preview
Higher values = high quality but long render time
Enlarge this image to see the difference from the previous one. Maybe I exaggerated with the parameters; this is only an example, done because the scene is really simple and quick to render, but remember that within production timings these parameters must be precisely calibrated, especially for animations.
That's all for this simple mesh and this article. Now I apply the material to my mesh and render again.
This point is really important: if you change the light setup, the scene or simply the mesh, the result can change drastically. I always suggest proceeding step by step in scene construction and in parameter tweaking.
This is not a professional result, so I will have to work on it a little more. The following video was obtained with a more complicated SSS shader that handles displacement too.