Playing with the vertex shader
The main goal of a WebGL program, or any 3D graphics library, is to take a set of points in 3D space, manipulate them in various ways, and draw them to the screen as a two-dimensional image, very quickly. If you do this enough times per second, you get a moving image that looks three-dimensional enough to fool a human brain.
My previous post focused on the fragment shader, which is responsible for taking the triangle data and drawing it, pixel by pixel, to a flat canvas. The effect in the demo below, meanwhile, mostly makes use of the vertex shader. The vertex shader runs once for each vertex in the input data and determines its final position. We can use this to make a sphere weird:
In three.js, a basic vertex shader looks like this:
varying vec3 vNormal;
varying vec3 vPosition;
varying vec2 vUvs;

void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  vNormal = (modelMatrix * vec4(normal, 0.0)).xyz;
  vPosition = (modelMatrix * vec4(position, 1.0)).xyz;
  vUvs = uv;
}
This shader performs two jobs. First, it tells WebGL the final position of the vertex (by setting the value of gl_Position). Then it defines some varying variables which will be passed to the fragment shader to help it perform its calculations; in this case, the vertex’s normal, a vector representing its position, and its uv coordinates. Because the fragment shader runs for every pixel of a triangle, not just its vertices, the varying values that the fragment shader sees will be an interpolation of the values passed by the three vertices that make up the triangle.
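That interpolation can be sketched in plain JavaScript (the function name and example values here are made up for illustration; the GPU does this automatically for every fragment):

```javascript
// A sketch of how the GPU interpolates a `varying` across a triangle.
// The weights are barycentric coordinates: each fragment gets a blend of
// the three vertices' values, weighted by how close it is to each corner.
function interpolateVarying(v0, v1, v2, w0, w1, w2) {
  return v0.map((_, i) => w0 * v0[i] + w1 * v1[i] + w2 * v2[i]);
}

// Suppose the three vertices pass three different normals (or colours, etc.)...
const a = [1, 0, 0], b = [0, 1, 0], c = [0, 0, 1];

// ...then a fragment sitting exactly on a corner sees that vertex's value...
console.log(interpolateVarying(a, b, c, 1, 0, 0)); // [ 1, 0, 0 ]

// ...while a fragment at the centre sees an even mix of all three.
console.log(interpolateVarying(a, b, c, 1 / 3, 1 / 3, 1 / 3));
```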
Before they’re set, this shader uses matrices provided by three.js to transform the original vertex, because the vertices provided by the mesh are defined in terms of that mesh’s local coordinates. Multiplying the vertex vector by the modelViewMatrix transforms it into view space: the scene’s coordinates as seen from whatever camera we have set up in three.js. Multiplying it again by the projectionMatrix projects it into clip space, giving the vertex’s position on the canvas and letting us see the model in proper perspective.
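To make the matrix chain concrete, here is the same computation sketched in plain JavaScript. Everything here (the helper name, the identity stand-ins) is made up for illustration; in the real shader the matrices come from three.js:

```javascript
// Multiply a 4x4 matrix (column-major, as WebGL stores them) by an
// [x, y, z, w] vector.
function mulMat4Vec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

const identity = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1,
];

// A local-space vertex; w = 1.0 so translations would apply.
const position = [1, 2, 3, 1];

// gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
const modelViewMatrix = identity;  // stand-in: camera at the origin
const projectionMatrix = identity; // stand-in: no perspective
const clip = mulMat4Vec4(projectionMatrix, mulMat4Vec4(modelViewMatrix, position));

// The GPU then divides by w; the x and y of the result land on the canvas.
const ndc = [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
console.log(ndc); // [ 1, 2, 3 ]: identity matrices leave the point unchanged
```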
Beyond that, though, we can add our own manipulations. Because the position vector is read-only, we first have to copy it into a new variable; we then modify that vector somehow before passing it to gl_Position as before. We can, for example, multiply the vertex vector by 2.0, doubling the size of the mesh that is drawn to the screen.
varying vec3 vNormal;
varying vec3 vPosition;
varying vec2 vUvs;

void main() {
  vec3 _position = position;
  _position *= 2.0;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(_position, 1.0);
  vNormal = (modelMatrix * vec4(normal, 0.0)).xyz;
  vPosition = (modelMatrix * vec4(position, 1.0)).xyz;
  vUvs = uv;
}
Or we can pass in the total elapsed time as a uniform and scale the vector by sin(time), so the mesh grows and shrinks in a pulsing manner:
varying vec3 vNormal;
varying vec3 vPosition;
varying vec2 vUvs;

uniform float time;

void main() {
  vec3 _position = position;
  _position *= 2.0;
  // `settings.fPulseSpeed` and `remap` are helpers from the demo; `remap`
  // linearly maps sin's [-1, 1] output into the range [0.7, 1.0].
  _position *= remap(sin(time * (settings.fPulseSpeed / 10.0)), -1.0, 1.0, 0.7, 1.0);
  gl_Position = projectionMatrix * modelViewMatrix * vec4(_position, 1.0);
  vNormal = (modelMatrix * vec4(normal, 0.0)).xyz;
  vPosition = (modelMatrix * vec4(position, 1.0)).xyz;
  vUvs = uv;
}
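On the JavaScript side, the time uniform has to be fed to the shader every frame. A minimal sketch of how that wiring typically looks in three.js, assuming the shader strings, renderer, scene and camera are set up elsewhere:

```javascript
import * as THREE from 'three';

const material = new THREE.ShaderMaterial({
  uniforms: {
    time: { value: 0.0 },
  },
  vertexShader,   // the GLSL above, as a string (assumed in scope)
  fragmentShader, // the demo's fragment shader (assumed in scope)
});

const clock = new THREE.Clock();

function animate() {
  requestAnimationFrame(animate);
  // Push the elapsed seconds into the shader before each draw.
  material.uniforms.time.value = clock.getElapsedTime();
  renderer.render(scene, camera); // renderer/scene/camera set up elsewhere
}
animate();
```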
To achieve the weird warping effect in the demo, we take the vertex’s normal, scale it by the sin of time plus a value derived from the vertex’s position, and add the result to the original position vector.
varying vec3 vNormal;
varying vec3 vPosition;
varying vec2 vUvs;

uniform float time;

void main() {
  vec3 _position = position;
  float ty = sin(
    time * (settings.fWarpSpeed / 10.0)
    + _position.y * _position.x * _position.z // Change based on position.
    * 20.0); // And a constant; bigger means bigger ripples.
  ty = remap(ty, -1.0, 1.0, 0.0, 0.2);
  _position += normal * ty;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(_position, 1.0);
  vNormal = (modelMatrix * vec4(normal, 0.0)).xyz;
  vPosition = (modelMatrix * vec4(_position, 1.0)).xyz;
  vUvs = uv;
}
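Both of the last two shaders lean on a remap helper, which is not a GLSL built-in; the demo presumably defines its own. Here is what such a linear remapping typically looks like, sketched in JavaScript:

```javascript
// Linearly remap `value` from the range [inMin, inMax] to [outMin, outMax].
// Not a GLSL built-in: the shaders above must define their own version.
function remap(value, inMin, inMax, outMin, outMax) {
  const t = (value - inMin) / (inMax - inMin); // normalise to 0..1
  return outMin + t * (outMax - outMin);       // scale into the new range
}

// sin() swings between -1 and 1; remapped to [0.7, 1.0] it becomes a
// gentle scale factor that never shrinks the mesh below 70%.
console.log(remap(-1, -1, 1, 0.7, 1.0)); // 0.7
console.log(remap(0, -1, 1, 0.7, 1.0));  // midway through the range, ~0.85
```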
Flat shading
If you turn off the flat shading option on the demo, you might notice that this warping interacts badly with the Phong lighting that’s applied in the fragment shader. Or rather, it doesn’t interact at all: the diffuse lighting and specular highlights don’t take the distorted shape into account, and act as though the mesh is still spherical. This is because the normal that we pass to the fragment shader is not transformed in the same way as the _position vector. Recomputing it properly is, apparently, doable but complicated. A quick alternative is to derive a face normal in the fragment shader using the vPosition varying. In the fragment shader:
/* We previously used the `vNormal` varying:
   vec3 normal = normalize(vNormal); */
vec3 normal = normalize(
  cross(dFdx(vPosition.xyz), dFdy(vPosition.xyz))
);
The dFdx/dFdy functions give the partial derivative of a value with respect to the screen’s x or y axis, respectively. Given the interpolated position, dFdx(vPosition.xyz) returns a vector lying along the current triangle’s surface in the screen-x direction, and dFdy(vPosition.xyz) gives one in the screen-y direction. The cross product of two vectors is a new vector perpendicular to both, so the call to cross returns a vector pointing straight out of the triangle. We can use this as a normal for this face of the mesh.
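The same construction can be sketched on the CPU: take two edge vectors lying in a triangle’s plane (the role dFdx and dFdy play per fragment) and cross them. All names here are made up for illustration:

```javascript
// Cross product of two 3-vectors: the result is perpendicular to both inputs.
function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

// Scale a vector to unit length so it can serve as a normal.
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

// A triangle lying flat in the xy-plane...
const [p0, p1, p2] = [[0, 0, 0], [1, 0, 0], [0, 1, 0]];

// ...gives two edge vectors in that plane (what dFdx/dFdy approximate)...
const edgeX = [p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2]];
const edgeY = [p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2]];

// ...so their cross product points straight out of the face, along +z.
console.log(normalize(cross(edgeX, edgeY))); // [ 0, 0, 1 ]
```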
For high-poly models, this looks basically the same as using the actual per-fragment normal, and gives us much better shading. If you really lower the “triangle count” slider, though, it becomes clearer that every fragment on a given triangle has the same normal (and as such, the same lighting value), so you get an odd “flat-shading” effect.