A hexagon shader in OSL, second edition

A while ago I made a simple OSL shader for hexagonal tilings. Prompted by some questions on BlenderArtists I decided to create a more versatile version that retains the color options but adds the distance to the closest edge. That feature may help in creating crisp edge patterns, because the previously available distance to the closest center creates rounded shapes that might be suitable for bee hives, but not for hard-edged stuff like floor tiles.

The noodle used to create the image is shown below:

The color code has stayed the same; the calculation of the distance to the edge is new. It might be a bit inefficient, but at least it's easy to read.

The additions are shown below; the full code is available on GitHub.

    // distance to nearest edge
    // (Coordinates, A2, SWAP(), Width and the outputs Center/Edge/InEdge
    //  are declared in the full shader on GitHub)
    
    float x = mod(Coordinates[0]/3,1.0);
    float y = mod(Coordinates[1]/3,A2);
   
    #define N 18
    // all hexagon centers in and around the current tile
    vector hc[N] = {
        vector(  0, -A2/3     ,0),
        vector(  0,     0     ,0),
        vector(  0,  A2/3     ,0),
        vector(  0,2*A2/3     ,0),
        vector(  0,  A2       ,0),
        vector(  0,4*A2/3     ,0),

        vector(0.5, -A2/3+A2/6,0),
        vector(0.5,     0+A2/6,0),
        vector(0.5,  A2/3+A2/6,0),
        vector(0.5,2*A2/3+A2/6,0),
        vector(0.5,  A2  +A2/6,0),
        vector(0.5,4*A2/3+A2/6,0),

        vector(1.0, -A2/3     ,0),
        vector(1.0,     0     ,0),
        vector(1.0,  A2/3     ,0),
        vector(1.0,2*A2/3     ,0),
        vector(1.0,  A2       ,0),
        vector(1.0,4*A2/3     ,0)
    };
    
    // d will hold the distance to every center; t is a temp for SWAP()
    float d[N], t;
    
    for(int i=0; i < N; i++){
        float dx = x - hc[i][0];
        float dy = y - hc[i][1];
        d[i] = hypot(dx, dy); 
    }
    
    // sort the distances, shortest first (simple bubble sort)
    for(int j= N-1; j >= 0; j--){
        for(int i= 0; i < j; i++){
            if(d[i] > d[i+1]){ 
                SWAP(t, d[i], d[i+1]);
            }
        }
    }
    
    Center  = d[0];          // distance to the nearest center
    Edge    = d[1] - d[0];   // approaches zero near a cell border
    InEdge  = Edge < Width;  // 1 if Edge is below the Width threshold

The approach we have taken is very simple: the array hc enumerates all nearby hexagon centers. We then calculate the distances to all of these points and sort them from shortest to longest (yes, with a bubble sort: with only 18 elements a more efficient sorting algorithm might just be marginally faster, but at the cost of considerably more complex code, so I don't bother).

The Edge value is not really the distance to the closest edge but the difference between the distance to the closest center and the distance to the next closest one. Near an edge these two values become more and more alike, so Edge approaches zero there. For convenience we also provide a comparison with a threshold (the Width input).
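
Just to illustrate that behavior, here is a small Python sketch that mirrors the sort-two-nearest-centers logic for a single point. It assumes A2 = sqrt(3), the value that makes the lattice above a regular hexagonal one; check the full shader on GitHub for the actual definition.

import math

A2 = math.sqrt(3)  # assumption: the cell height that makes the lattice regular

# the same centers as the hc array in the OSL code
hc = [(x, y + (A2/6 if x == 0.5 else 0.0))
      for x in (0.0, 0.5, 1.0)
      for y in (-A2/3, 0.0, A2/3, 2*A2/3, A2, 4*A2/3)]

def center_and_edge(x, y):
    d = sorted(math.hypot(x - cx, y - cy) for cx, cy in hc)
    return d[0], d[1] - d[0]   # Center, Edge

print(center_and_edge(0.0, 0.0))      # at a center: Edge is at its maximum
print(center_and_edge(0.25, A2/12))   # halfway between two centers: Edge is 0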

If you would like to know more about programming OSL you might be interested in my book "Open Shading Language for Blender". More on the availability of this book and a sample can be found on this page.

Inheritance and mixin classes vs. Blender operators

Recently I was working on an addon that offers functionality in both weight paint mode and vertex paint mode. There were of course slight differences in functionality, so I needed two operators, but I wanted to keep most of the shared code in a single mixin class.

Sharing member functions was easy enough using a mixin class, but for some reason this didn't work for properties: any property I put in the mixin class was not accessible from the derived classes. What was happening here?

import bpy
from bpy.props import FloatProperty

class mixin(bpy.types.Operator):
  foo = FloatProperty()
  def oink(self): pass

class operator1(mixin):
  def execute(self, context):
    self.oink()  # ok
    bar = self.foo  # error :-(

However, if I didn't derive the mixin class from bpy.types.Operator, all was well:

class mixin:
  foo = FloatProperty()
  def oink(self): pass

class operator1(bpy.types.Operator, mixin):
  def execute(self, context):
    self.oink()  # ok
    bar = self.foo  # ok too :-)

So what is happening here? Sure, in the first scenario both the mixin and the derived class inherit from bpy.types.Operator, but that shouldn't be a problem, should it? After all, Python's inheritance model supports the diamond pattern.

The answer to this riddle is hidden in Blender's source code, quote:

/* the rules for using these base classes are not clear,
 * 'object' is of course not worth looking into and
 * existing subclasses of RNA would cause a lot more dictionary
 * looping then is needed (SomeOperator would scan Operator.__dict__)
 * which is harmless but not at all useful.
 *
 * So only scan base classes which are not subclasses if blender types.
 * This best fits having 'mix-in' classes for operators and render engines.
 */

So when creating an instance of a class derived from bpy.types.Operator, only the class itself and any base classes that are not derived from Blender types are searched for properties. This reduces needless traversal of base classes that do not define properties themselves and avoids duplicate searches when there is a diamond pattern, but it sure is confusing.

It does make sense though: instantiation of operators should be as cheap as possible, because instantiation happens each time a screen area with the operator's controls is redrawn! And this can add up: merely moving your mouse can result in tens of instantiations.

So the moral of this story? Sure, it's awkward, especially since it isn't well documented, but it is an understandable design choice. There is one serious drawback though: if you have the mixin and your operators in the same file, you cannot use register_module(), because that function tries to register all classes, even the mixin class, and that will fail.

Conclusion

When using inheritance with classes derived from bpy.types.Operator and you want to define properties in a mixin class, make sure that:

  • the mixin class does not derive from bpy.types.Operator,
  • the final classes do derive from bpy.types.Operator (and from the mixin of course),
  • you don't use register_module() but call register_class() for each operator separately (and do not register the mixin); see the sketch below.
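
A minimal sketch of that setup, written in the same 2.7x style as the snippets above (the operator names and ids here are made up, not from a real add-on):

import bpy
from bpy.props import FloatProperty

class mixin:  # note: does NOT derive from bpy.types.Operator
    strength = FloatProperty(name="Strength", default=1.0)

    def do_common_work(self, context):
        pass  # shared functionality goes here

class WeightPaintOp(bpy.types.Operator, mixin):
    bl_idname = "paint.my_weight_op"
    bl_label = "My weight paint operator"

    def execute(self, context):
        self.do_common_work(context)
        return {'FINISHED'}

class VertexPaintOp(bpy.types.Operator, mixin):
    bl_idname = "paint.my_vertex_op"
    bl_label = "My vertex paint operator"

    def execute(self, context):
        self.do_common_work(context)
        return {'FINISHED'}

def register():
    # register each operator separately; the mixin itself is not registered
    bpy.utils.register_class(WeightPaintOp)
    bpy.utils.register_class(VertexPaintOp)

def unregister():
    bpy.utils.unregister_class(VertexPaintOp)
    bpy.utils.unregister_class(WeightPaintOp)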

Blender addon: Assign random vertex colors to connected vertices

Prompted by a request in this BlenderArtists thread I created a small add-on that assigns random vertex colors to regions with connected vertices.

The image shows a single object consisting of several cubes. Within each cube all vertices are connected and therefore get the same (but random) color.
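
The actual implementation is available on GitHub (see below), but the idea is simple enough to sketch: find the connected components of the mesh and give each component its own random color. A rough illustration of that approach, written against the 2.7x Python API (my own sketch, not the add-on's code):

import random
import bpy

def random_colors_per_region(obj):
    me = obj.data
    # union-find over the vertices, joining the two endpoints of every edge
    parent = list(range(len(me.vertices)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for e in me.edges:
        a, b = e.vertices
        parent[find(a)] = find(b)

    # pick one random color per connected region
    region_color = {}
    for i in range(len(me.vertices)):
        r = find(i)
        if r not in region_color:
            region_color[r] = (random.random(), random.random(), random.random())

    # vertex colors are stored per loop, so write the region color of the
    # loop's vertex into the active (or a new) vertex color layer
    vcol = me.vertex_colors.active or me.vertex_colors.new()
    for loop_index, loop in enumerate(me.loops):
        vcol.data[loop_index].color = region_color[find(loop.vertex_index)]

random_colors_per_region(bpy.context.active_object)

Because vertex colors live on loops rather than on vertices, the color of a vertex has to be written to every loop that references it, which is what the last loop does.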

Code availability

The code is available on GitHub. Use File -> User Preferences -> Addons -> Install from file to install it and don't forget to enable the check box. It will then be available in the Paint menu in Vertex Paint mode.

For discussion you might want to check here.

Blender addon: visible vertices part III: bug fixes

A couple of weeks ago I presented a small addon that creates a vertex group with weights that depend on the visibility of each vertex from the camera. This can be quite useful if you want to restrict particles to areas where they are actually visible, which reduces render time and memory consumption, and in a follow-up article I showed some new features. The addon was well received but contained an annoying bug: vertices that were far enough behind the camera received weight as well. This is now fixed, and I also disabled numerous print statements accidentally left in for debugging purposes.
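
One way to guard against vertices behind the camera is to check the third component returned by bpy_extras.object_utils.world_to_camera_view(): it is the distance along the view axis and becomes negative behind the camera. A minimal sketch of that test (not the add-on's literal code):

from bpy_extras.object_utils import world_to_camera_view

def in_front_of_camera(scene, cam, co_world):
    # the z component is negative for points behind the camera
    return world_to_camera_view(scene, cam, co_world).z > 0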

Availability and usage

Version 0.0.3 of the add-on is available from GitHub (right-click the link to download the script somewhere, then install it from Blender with File -> User Preferences -> Addons -> Install from file. Don't forget to enable the check box and don't forget to remove any previous version from your addon directory).

Once installed it's available in weight paint mode from the Weights menu.

The addon might also be discussed in this BlenderArtists thread.

Blender Addon: Weight paint vertices visible from the active camera, part II: distance weights and more

A couple of weeks ago I presented a small addon that created a vertex group with weights that depend on the visibility of the vertex from the camera. This can be quite useful if you want to restrict particles to areas where they are actually visible, which will reduce render time and memory consumption.

The addon did take into account other objects in the scene that could block the view, but there was room for improvement. The first new feature in this version is that the weight of the visible vertices diminishes with the distance from the camera. For a field of grass particles you can often do with fewer particles at a greater distance without destroying the apparent density, so this helps in cutting down the number of particles even further.

The second addition is the option to add a margin around the camera frame. Some extra particles just outside the camera view might be needed to prevent a sparse edge and to allow shadows from outside the view.
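
A rough sketch of how both ideas can be combined into a per-vertex weight (the parameter names are mine, not the add-on's; the ray-cast visibility test against other objects is left out of this sketch):

from bpy_extras.object_utils import world_to_camera_view

def camera_weight(scene, cam, co_world, margin=0.1, max_dist=50.0):
    # x, y are normalized camera-frame coordinates (0..1 inside the frame),
    # z is the distance along the view axis (negative behind the camera)
    x, y, z = world_to_camera_view(scene, cam, co_world)
    if z <= 0:
        return 0.0
    # allow a margin just outside the camera frame
    if not (-margin <= x <= 1 + margin and -margin <= y <= 1 + margin):
        return 0.0
    # weight falls off linearly with the distance to the camera
    return max(0.0, 1.0 - z / max_dist)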

The final extra is the addition of a vertex weight edit modifier. Probably one of the lesser known modifiers, this nifty tool allows us to tweak vertex weights after they are calculated. The addon itself, for example, decreases the weight linearly with distance, but if you want it to fall off in a different way this can be done quite easily by tweaking the curve in the modifier.

All these new options are on by default and the addon even makes sure the vertex weight modifier is visible in the properties panel just to draw some attention to it.

Example

The image below shows the generated weights falling off with the distance to the camera, with some extra margin selected:

Availability and usage

Version 0.0.2 of the add-on is available from GitHub (right-click the link to download the script somewhere, then install it from Blender with File -> User Preferences -> Addons -> Install from file. Don't forget to enable the check box).

Once installed it's available in weight paint mode from the Weights menu.

The addon might also be discussed in this BlenderArtists thread.

Settling of particles in suspension with Blender and OSL

Playing with volumetrics I wanted to approximate the effect of the settling of suspended particles from a fluid, like what happens when you leave a glass of orange juice standing for too long. The idea is that there is some region near the bottom of the glass with a dense mass of settled particles, and that in the beginning the diminishing density towards the surface is not all that visible but gets clearer with time.

Of course this could be implemented with math nodes, but I find it far simpler to write it all down in OSL and condense the logic into a single script node. The result shown below is admittedly not an animation that will get nominated for the next Academy Award, but it does nicely show the slow settling I had in mind, including the effect that near the end the fluid seems to clear up more quickly:

OSL code and node setup

The node is simple enough:

shader edgedecay(
    point Pos    = P,
    float Edge   = 0.1,
    float Power  = 1,
    float Density= 10,
    float Length = 1 - Edge,
    
    output float Fac=Density
){
    float h = Pos[2]; // height of the shading point

    if(h>Edge){
        // above Edge the density falls off with the normalized height d
        float d = (h - Edge)/Length;
        if(Power > 0 && d < 1){
            Fac = Density * ( 1 - pow(d,Power));
        }else{
            Fac = 0;
        }
    }
}

The node setup for the video clip is:

The Density input was animated from 2.7 to 3.5 in 100 frames, while at the same time the Power input was animated from 1.0 to 0.0.

The following graph shows the plot of the density profile with relevant parameters:
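
From the code above, the profile is Fac = Density below Edge, Density * (1 - ((h - Edge)/Length)^Power) between Edge and Edge + Length, and 0 above that. If you want to recreate such a plot yourself, a quick matplotlib sketch will do (the parameter values here are just examples, not the ones used for the clip):

import numpy as np
import matplotlib.pyplot as plt

def profile(h, edge=0.1, power=1.0, density=10.0):
    # same falloff as the edgedecay shader (for Power > 0)
    length = 1.0 - edge
    d = np.clip((h - edge) / length, 0.0, 1.0)
    return density * (1.0 - d**power)

h = np.linspace(0.0, 1.0, 200)
for power in (1.0, 0.5, 0.1):
    plt.plot(h, profile(h, power=power), label="Power = %.1f" % power)
plt.xlabel("height")
plt.ylabel("density")
plt.legend()
plt.show()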

If you would like to know more about programming OSL you might be interested in my book "Open Shading Language for Blender". More on the availability of this book and a sample can be found on this page.

OSL support in mainstream renderers (V-Ray and Renderman)

Things are getting interesting now that more major players start supporting OSL. This week Pixar announced support for OSL in Renderman, and a few months ago Chaosgroup released a version of V-Ray with OSL integration.

Not all integrations are equal; the one in Renderman, for example, doesn't appear to support closures (BSDFs). Nevertheless, there is now a huge potential to share programmatic patterns and textures between V-Ray, Renderman and Blender. I am quite excited and can't wait to see how this will foster the reuse of shaders across platforms. To illustrate the portability: some of the shaders provided as examples on the V-Ray page work in Blender without a single change, while others need some tweaks. This is mainly because each renderer provides its own set of closures (for example, Blender does not have the phong() closure) and, more importantly, because the output model is not identical: the V-Ray shaders don't provide closure outputs but simply assign to the Ci variable, something that has no effect in Blender.

If you would like to know more about programming OSL you might be interested in my book "Open Shading Language for Blender". More on the availability of this book and a sample can be found on this page.