
    Oct. 5th, 2020

    Tags: gamedev challenge, programming
    posted @ 12:11 pm

    procedural skybox

    so, two week projects

    this one was actually a pretty big success! it's been hard to uhhh schedule anything these days, and going forward things are likely gonna be a mess for at least another month or two b/c i'm moving. but i wanted to get something nice and visual to show off, and one of the things i've been wanting to do for a while is shaders.

    so i made a skybox shader for my game!


    first let's maybe take a diversion into what a shader is, because people (well, ~gamers~) hear the word a lot but it's maybe not immediately obvious what exactly they are, aside from 'something to do with graphics'. you can skip this if you already know what a shader is.

    digital rendering is fundamentally about polygons, right? drawing triangles on the screen. that's a several-step process: placing some points in 3d space and connecting them together to make a triangle, figuring out what that triangle would look like on the 2d plane of the screen, and then filling in every visible pixel of that triangle with some appropriate color. these are all fairly tricky operations, and they're generally handled by the rendering library you're using.

    (this is a tangent, but for example, opengl has this property where edges are always seamless if their two endpoints are exactly the same, which is to say that the line equation they use to figure out which screen pixels are in a triangle will always be one side of the line or the other, with no gaps or overlaps, if you give it two triangles in the shape of a square with a diagonal line through it. they can seam if you generate geometry with 't-junctions', because then the endpoints aren't exactly the same-- this is something you have to keep in mind when generating geometry, & actually some of my map code does have seams because of the way i'm rendering edges.

    conversely, the old psx rendering architecture did not give this guarantee, which is why in a lot of psx games you can see the geometry flicker and seam apart with weird lines, which is where the 'is this pixel in this triangle or that one' test is failing for both triangles on what should be seamless geometry.)

    so this always involves a lot of really low-level math calculations -- when i say "figure out the position in 2d space", what that actually means is doing some matrix multiplication for every vertex to project it into screen space, and when i say "fill in every visible pixel", that means doing a bunch of line scanning loops. these things were what the first GPUs were designed to optimize for.

    so, opengl used to have what's called a fixed-function rendering pipeline: you basically had no control over what happened during the rendering process, aside from some very basic data. for example, your 3d points came with a 3d vector attached that was their position. but you could additionally attach another 3d vector to each point; this would be interpreted as a color, which gave you vertex-coloring. if you wanted to texture your triangles, instead of having them be a flat color, you could attach a 2d vector to each point; this would be interpreted as uv coordinates, or the place to sample from the currently-bound texture, and then it would multiply that sampled color by the vertex color to get the actual color. the final thing you could attach was another 3d vector, which would be interpreted as a vertex normal, and that would be used in the (fixed, hardcoded) opengl lighting calculations, to give you basic support for lighting techniques.

    there's a process in there that would look at the values attached to each vertex, look at where each pixel sat within the triangle, and then interpolate between the vertex values for that pixel. this is how you got smooth shading, or working textures: if you had one UV coordinate that was `0, 1` and another that was `1, 1` it would generate some difference vectors and then, while counting out each pixel, it would provide that pixel with an interpolated UV, leading to different values (color, textures, lighting) getting sampled across each pixel. (actually iirc the opengl built-in lighting was gouraud shading, not the more modern phong shading, and if you care about the difference you can look that up but this is already a really long shader explanation.)
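    a minimal sketch of that interpolation step, in plain haskell with the `linear` package (not real rasterizer code, and perspective correction is ignored): compute barycentric weights for a pixel inside a 2d triangle, then use them to blend the per-vertex UVs.

    ```haskell
    import Linear (V2 (..), V3 (..), (^*), (^+^))

    -- barycentric weights of point p with respect to triangle (a, b, c)
    barycentric :: V2 Float -> V2 Float -> V2 Float -> V2 Float -> V3 Float
    barycentric (V2 px py) (V2 ax ay) (V2 bx by) (V2 cx cy) =
      let -- twice the signed area of a 2d triangle
          area (x0, y0) (x1, y1) (x2, y2) = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
          total = area (ax, ay) (bx, by) (cx, cy)
          wa = area (px, py) (bx, by) (cx, cy) / total
          wb = area (ax, ay) (px, py) (cx, cy) / total
      in V3 wa wb (1 - wa - wb)

    -- blend a per-vertex attribute (here a UV coordinate) at a pixel
    interpolateUV :: V3 Float -> V2 Float -> V2 Float -> V2 Float -> V2 Float
    interpolateUV (V3 wa wb wc) uvA uvB uvC =
      uvA ^* wa ^+^ uvB ^* wb ^+^ uvC ^* wc
    ```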

    so what opengl supported, near the beginning, was this: 3d points, colors, texture coords, lighting normals, and you could bind camera & perspective matrices, textures, and lights. those were the only options available, really.

    then opengl went up a few versions and decided to be more flexible. instead of all of those things being hardcoded into opengl itself, they decided, they would provide a programmable interface for people to write their own vertex and pixel code. this generalized everything here: instead of being able to send in only positions, or colors, or uvs, or normals, you could attach any kind of arbitrary information you wanted to a vertex, provided you could ultimately use it to output a position. and instead of pixels being rendered by applying these fixed transforms of vertex color times uv coordinate texture lookup sample times lighting angle, you could do anything you wanted with the per-pixel interpolated values, provided you could ultimately use it to output a final color.
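    here's a toy model of that contract, in plain haskell rather than gpipe (the type and function names are made up for illustration): the vertex stage can take whatever attributes you decide to attach, as long as it ultimately produces a clip-space position (plus the values to interpolate), and the fragment stage can do whatever it likes with the interpolated values, as long as it ultimately produces a color.

    ```haskell
    import Linear (M44, V2 (..), V3 (..), V4 (..), (!*))

    -- whatever per-vertex data you feel like attaching (hypothetical example)
    data Vertex = Vertex { vPosition :: V3 Float, vUV :: V2 Float }

    -- "vertex stage": anything in, but a clip-space position out,
    -- plus the values handed along to be interpolated per-pixel
    vertexStage :: M44 Float -> Vertex -> (V4 Float, V2 Float)
    vertexStage mvp (Vertex (V3 px py pz) uv) = (mvp !* V4 px py pz 1, uv)

    -- "fragment stage": interpolated values in, a color out; here it just
    -- samples some texture function at the interpolated uv
    fragmentStage :: (V2 Float -> V3 Float) -> V2 Float -> V3 Float
    fragmentStage sampleTexture uv = sampleTexture uv
    ```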

    this blew things wide open. i remember back in the day when i had a computer old enough that it couldn't run shaders, and i couldn't run any newer games, and i was like, 'look shaders are all just fancy effects; i don't see why they can't just have a fallback mode'. and sure, if your shader is just a fancy blur effect you're probably still doing much the same kind of thing as the old opengl fixed pipeline: giving pixels colors based on base color and texture color and lighting level. but there's so much stuff that cannot possibly be replicated with the fixed-function pipeline. there's actually a really interesting talk by the thumper devs where they talk about how they make the track seamless by storing twist transforms in their shader code and applying them in their vertex shader, and doing most of their animations just by exposing a time value in their shaders (which is a very basic trick but it's one that was completely impossible in the old fixed pipeline).
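    the time-uniform trick in miniature (a generic illustration, not thumper's actual code): if the vertex stage is handed a time value, it can animate geometry entirely on the gpu, with no per-frame mesh updates from the cpu.

    ```haskell
    import Linear (V3 (..))

    -- displace each vertex by a sine wave driven by a time uniform; the mesh
    -- data itself never changes, only the uniform does
    ripple :: Float -> V3 Float -> V3 Float
    ripple time (V3 x y z) = V3 x (y + 0.1 * sin (time + x * 4)) z
    ```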


    so i wanted to make a skybox shader, because so far i've just been drawing a flat blue void outside of world geometry. one of the things i've kinda been thinking about for a while is a game where you have to navigate by the stars at night (inspired of course by breath of fire 3's desert of death section), and that implies that you have 1. distinct stars and 2. an astronomically-accurate rotating skybox.

    so this is where the first bit of shader magic comes into it: i placed a quad at the far clipping plane, and then inverted my camera matrix so that i could push each pixel's position on that quad back into world space. by converting those world-space directions into spherical coordinates, i could have a perfect spherical projection embedded into this flat quad, one that would correctly respond to camera movements. keep that in mind for every screenshot i post about this: this is a single flat quad, glued diorama-like to the far end of the camera; everything else is the projection math i'm doing per-pixel.
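    a sketch of that per-pixel math, in plain haskell with the `linear` package rather than the actual gpipe shader code (function names are made up, and it assumes a y-up convention and opengl-style -1..1 depth): unproject the pixel's spot on the quad back through the inverted view-projection matrix, turn that into a view direction, then convert the direction to spherical coordinates.

    ```haskell
    import Linear (M44, V2 (..), V3 (..), V4 (..), (!*), inv44, normalize)

    -- recover the world-space view direction for a pixel on the far-plane quad,
    -- given its normalized device coordinates (-1..1 on both axes) and the
    -- combined view-projection matrix
    viewDirection :: M44 Float -> V2 Float -> V3 Float
    viewDirection viewProj (V2 x y) =
      let invVP = inv44 viewProj
          unproject ndcZ =
            let V4 wx wy wz ww = invVP !* V4 x y ndcZ 1
            in V3 (wx / ww) (wy / ww) (wz / ww)      -- perspective divide
      in normalize (unproject 1 - unproject (-1))    -- far point minus near point

    -- convert a unit direction into spherical coordinates (azimuth, elevation),
    -- which is what the sky gets sampled with
    toSpherical :: V3 Float -> V2 Float
    toSpherical (V3 x y z) = V2 (atan2 z x) (asin (max (-1) (min 1 y)))
    ```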

    so i queued up a bunch of posts on my screenshot tumblr, starting from here with the first attempts (use the 'later' link).

    i wanted something stylized that matched the look of the rest of the game (so, aliased and triangular), so the first thing i wanted to figure out was how to draw a triangle onto the celestial sphere. this turned out to be pretty tricky, since shader code is pretty different from code you'd be writing elsewhere. like, if i wanted to draw a triangle polygon, i'd expect to calculate some points in space and then position lines between them. in a shader, you can't really do that -- or rather, it would be really inefficient. instead, you want an implicit equation, something you can evaluate in the same fixed form for every pixel that gives you a value telling you how close that pixel is to the triangle, so that you can threshold it and get an actual triangle shape.
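    for comparison, here's the flat 2d version of that kind of implicit test, in plain haskell (a sketch; the skybox has to do this on the sphere, which is where it gets hairy): one closed-form expression per pixel, thresholded, with no actual edge geometry anywhere.

    ```haskell
    import Linear (V2 (..))

    -- implicit test for a regular n-gon centered at the origin: project the
    -- pixel onto the nearest edge normal and compare against the apothem
    insideNGon :: Int -> Float -> V2 Float -> Bool
    insideNGon n apothem (V2 x y) =
      let a = atan2 y x
          step = 2 * pi / fromIntegral n
          nearest = step * fromIntegral (round (a / step) :: Int)  -- nearest edge-normal angle
          d = sqrt (x * x + y * y) * cos (a - nearest)             -- projection onto that normal
      in d <= apothem
    ```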

    i never actually got that working right, but i got it working kind of right and i figured, well, some distortion is fine.

    ultimately (as you can see if you page through all the posts) i got stars placed, constellation lines connected, the axis tilted, rotation happening, and even some very simple celestial object motion.

    the big flaws of this shader, currently, are that planets and moons are just flat circles. the major issue with that is that there are no moon phases, which looks weird as the moon advances relative to the sun in the plane of the ecliptic (the moon is up during the day when it's near a new moon, which irl we don't notice b/c the new moon is mostly invisible, but here it's still just a continual full moon). there's also no depth-testing the celestial bodies relative to each other; here i always draw the moon over the sun, and those are both always drawn over venus and mars, which are drawn over jupiter and saturn. so when planets are visible they're not actually properly occluding each other; when i was drawing things from in orbit around jupiter, i had to manually move jupiter up so that it would occlude the sun. there's also no 'eclipsing' code, which is an issue not just for solar eclipses, but also like, yeah on ganymede jupiter is continually eclipsing the sun, & between the 'moon phase' code and some 'eclipsing' code that should pretty radically change the look of the environment. but none of that is currently being handled at all.
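    for reference, the missing moon-phase calculation is mostly just one dot product: the illuminated fraction of the disc depends on the angle between the sun and moon directions as seen by the observer, treating the sun as infinitely far away. a sketch of what that could look like (not something the shader currently does):

    ```haskell
    import Linear (V3, dot, normalize)

    -- illuminated fraction of the moon's disc: 0 at new moon (moon right next
    -- to the sun in the sky), 1 at full moon (opposite the sun)
    moonIlluminatedFraction :: V3 Float -> V3 Float -> Float
    moonIlluminatedFraction sunDir moonDir =
      let cosElongation = normalize sunDir `dot` normalize moonDir
      in (1 - cosElongation) / 2
    ```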

    also i never did get the lighting angle correct -- at one point i got tired of the landscape being this fullbright shading regardless of the 'time of day' in the skybox, and so i pulled out some of the planetary calculations and just had the sun's position determine the light. this is another thing that's pretty hardcoded, right, since theoretically the moon should also be contributing light based on its phase, and if this is a jupiter-orbit situation where the sun is occluded then it should have its component cut out from the lighting calculation. not only am i not doing any of that, but i couldn't get the basic 'angle of the light in the sky' math right, so there's some weird displacement where the lighting doesn't really follow the sun all that accurately. i'm sure i'll figure it out at some point, and even just having any changing lighting looks great compared to what i had before.
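    the lighting being described is basically a lambertian term driven by the sun's sky direction; something like this sketch (the general idea, not the game's actual code):

    ```haskell
    import Linear (V3, dot, normalize)

    -- basic diffuse term: how directly a surface faces the sun, clamped to
    -- zero when the surface faces away from it
    sunDiffuse :: V3 Float -> V3 Float -> Float
    sunDiffuse sunDir surfaceNormal =
      max 0 (normalize sunDir `dot` normalize surfaceNormal)
    ```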

    likewise i'm doing some 'day/night' calculations, but that's just slapping a blue filter over things and fading out the stars; i'd like to do something a little more in-depth.
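    the 'blue filter plus fading stars' approach amounts to something like this: a single factor derived from the sun's elevation blends between a night tint and a day tint and scales the star contribution. the constants here are made up for illustration.

    ```haskell
    import Linear (V3, (^*), (^+^))

    -- 0 at night, 1 in full daylight, with a short ramp around the horizon
    -- (elevation in radians; the ramp width is arbitrary)
    daylight :: Float -> Float
    daylight sunElevation = max 0 (min 1 (sunElevation / 0.2 + 0.5))

    -- blend the sky tint and fade the stars by that factor
    skyColor :: Float -> V3 Float -> V3 Float -> V3 Float -> V3 Float
    skyColor d dayTint nightTint stars =
      (nightTint ^* (1 - d) ^+^ dayTint ^* d) ^+^ stars ^* (1 - d)
    ```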

    (also, since this game isn't gonna take place on earth i'd really like to add in some different stellar bodies -- more moons, different planets, that kind of thing. but that's its own project.)

    something else i discovered while working on this is that the shader code gpipe generates isn't very good. it aggressively inlines -- and i guess that makes sense because how could it not -- and that means that it generates hundreds or thousands of temporary variables. my shader code, which is fairly compact in haskell, undoubtedly expands into thousands and thousands of lines of incoherent garbage in GLSL, which is an issue because that's the code that's actually getting run. somebody mentioned a vulkan library for shader generation which calls out the inlining issue, but i don't know if a similar technique can be ported back to gpipe, and gpipe itself isn't really being actively updated. there's no full replacement for gpipe around, but i guess if anybody does make a similar vulkan library for haskell i might be tempted to switch over, if only because the shader compilation is really bad. not unworkable, but probably i don't have an enormous amount of shader budget to work with, just because the transpilation into GLSL is so inefficient without some way to manage the inlining.

    anyway here are some references that i looked at while working on this:

    • the deth p sun art that was one of the original inspirations for this
    • a starfield shader tutorial that i used some techniques from, notably the dot-product noise function (there's a sketch of that kind of hash after this list) and also the layering (which is how i have big stars, small stars, and really small stars in the galactic band)
    • a shader tutorial that helped me w/ the line-drawing sections for the constellations
    • ofc i was doing all this on spherical geometry, not cartesian, so i needed to check wikipedia a bunch for how to do the equivalent coordinate operation: 'spherical coordinate system', common coordinate transformations, great-circle distance
    • some stackoverflow stuff: how to calculate distance from a point to a line segment on a sphere (this is used in the constellation lines, right, b/c the pixel is the point and the constellation line is the line segment -- see the sketch after this list)
    • i didn't want to go overboard w/ star simulation, but i did want some varying colors, so i looked at the blackbody radiation spectrum and eyeballed it to get a coloring function. i had this idea to mix in a second spectrum of 'fantasy' colors so that i might get green or purple stars or w/e too, but i never got around to that.
    • i wasn't actually sure if it was possible to write draw-a-polygon code in a shader until i saw this shader, which in drawing that triangle implements a general-purpose arbitrary-regular-polygon function. i couldn't copy it directly but it definitely helped guide me towards something that worked in gpipe + on a sphere
    • also a bunch of people in various discords and irc channels, b/c actually i'm not super great with math! when i was griping about the pinching at the poles due to lat/long, somebody offhandedly was like 'oh yeah try taking the sine of that' and it turned out that was exactly what i should do.
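    on the dot-product noise: a minimal sketch, assuming the tutorial means the ubiquitous fract-of-sin-of-dot hash (the magic constants are the traditional ones, not anything specific to this shader):

    ```haskell
    import Linear (V2 (..), dot)

    -- classic pseudo-random hash: dot the coordinate against an arbitrary
    -- vector, push it through sin, scale it way up, keep the fractional part
    hash :: V2 Float -> Float
    hash p =
      let s = sin (p `dot` V2 12.9898 78.233) * 43758.5453
      in s - fromIntegral (floor s :: Int)
    ```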
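    and a sketch of the point-to-arc distance on the unit sphere, in plain haskell with the `linear` package (not the actual shader code): the perpendicular angular distance to the arc's great circle when the closest point falls within the segment, otherwise the distance to the nearer endpoint.

    ```haskell
    import Linear (V3, cross, dot, normalize, (^*))

    -- angular distance (radians) between two unit vectors
    angleBetween :: V3 Float -> V3 Float -> Float
    angleBetween u v = acos (max (-1) (min 1 (u `dot` v)))

    -- angular distance from point p to the great-circle arc from a to b, all
    -- given as unit vectors (assumes a and b aren't antipodal and p isn't at
    -- the arc's pole)
    distToArc :: V3 Float -> V3 Float -> V3 Float -> Float
    distToArc p a b =
      let n = normalize (a `cross` b)            -- normal of the arc's great circle
          c = normalize (p - n ^* (p `dot` n))   -- closest point on that great circle
          onArc = ((a `cross` c) `dot` n) >= 0 && ((c `cross` b) `dot` n) >= 0
      in if onArc
           then asin (min 1 (abs (p `dot` n)))                -- perpendicular angular distance
           else min (angleBetween p a) (angleBetween p b)     -- clamp to the nearer endpoint
    ```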

    all-in-all really proud of this! it's the first time i've done non-trivial shader work, and even though it still needs a lot of work it's a pretty good starting point. now the landscape looks way more like a landscape, and less like some geometry floating in a void!

