oh boy code reportbacks
i'm trying this again! or, specifically, i've been getting frustrated with not making progress with various projects, again, so i decided to try something like a year-long dev cycle for _something_. i have some kind of 'engine' in place here but i've been really bad at fleshing that out with any content, so i was like, well, i might as well try to focus on doing something, anything, with this code over the course of a year, and see where it ends up at the end of that. (arguably one of the big issues here is that i don't have a clear goal in mind so i don't really have the motivation to move towards that goal instead of randomly picking at disconnected problems. anyway.)
but anyway so the thing i worked on for the past ~2 weeks (since the 24th) was mostly rewriting the shader. shader stuff is always kind of fun. for a very long time i was using the ancient fixed pipeline that forces you to use, you know, 3d coordinates + color + texture uvs + normals as your inputs, and first switching to modern shaders was kind of wild since... you can do anything. this one polygon piece on shaders got spread around social media recently b/c people were objecting to some other people saying the liquid was "fake", as if a polygon representation of a liquid surface is more 'real' than a shader representation of one. as computing tech has gotten faster i've seen raycasting shaders become more mainstream, so you get more volumetric clouds or w/e but you also get more false geometry where you do raycasts from the polygon surface down to an implicit surface defined by some displacement values or w/e. so with a shader you can genuinely do anything.
anyway this isn't really anything like that! or rather, so, the old shader was all in 2d: i'd precompute the screen position of tiles, in pixels, and then have the shader just convert those into positions in the opengl -1 to 1 clip cube. this was mostly because before i was rendering everything in canvas, which did take the 2d screen position, so that was the quickest way to switch over to webgl. but now that it's a shader, i could do all sorts of stuff! notably i wanted two major things to be possible: camera rotation (not animated, just, it being able to happen without needing to totally reconstruct all the render buffer coords); and fragment dropping of stuff that occludes the player, as in "stop rendering things in front of the player", if they walk behind a big wall or w/e. (i didn't end up actually implementing either of those things, but i did all the infrastructural work so that now they're fairly minor tasks instead of needing to, uh, totally rewrite the shader.) the main thing there was that i'd need to send block data in 'world coords' to the shader, rather than in 'screen coords'. this is a very common thing to do in video games! you usually have, you know, a camera matrix that transforms coordinates from world space to view space and then on to screen space. it was just very funny to have the same thing apply to this 2.5d isometric setup. yup, still need a camera matrix, although it ends up looking a bit different than a "normal" one.
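for what it's worth, the classic 2:1 isometric mapping really can be written as a plain matrix, which is what makes it slot into the usual camera-matrix machinery. here's a rough javascript sketch of the idea, not the actual code — the tile sizes and every name here are made up for illustration:

```javascript
// a hypothetical 2:1 isometric "camera matrix": maps world (x, y, z)
// to screen pixels. this is just the standard iso formulas
//   screenX = (x - y) * TILE_W / 2
//   screenY = (x + y) * TILE_H / 2 - z * TILE_H
// written as rows of a 2x3 matrix.
const TILE_W = 32, TILE_H = 16;

const isoMatrix = [
  [TILE_W / 2, -TILE_W / 2,       0],
  [TILE_H / 2,  TILE_H / 2, -TILE_H],
];

// multiply the matrix by a world-space point to get a screen position
function project(m, [x, y, z]) {
  return [
    m[0][0] * x + m[0][1] * y + m[0][2] * z,
    m[1][0] * x + m[1][1] * y + m[1][2] * z,
  ];
}

console.log(project(isoMatrix, [1, 0, 0])); // one tile "east" → [16, 8]
```

the nice part is that once the projection is a matrix uniform, changing the view means changing the matrix, not rebuilding any per-tile data.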
this also meant splitting off some data. like i won't get into the super low-level details here but since each tile is actually a bunch of vertices that are positioned differently, i needed some other values to properly size and scale them, and those needed to be kept separate from the world coords since you don't want to, like, rotate the sprites themselves, just reposition the tiles they represent. this is all basically a very specific application of sprite billboarding. but it's been very funny to me to isolate all these sub-values like "post-transform pixel displacement" or "size multipliers" from the actual raw world coords, since... yeah, it's a shader. you can dump in whatever information you want and alter it in any way you want.
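the billboarding split boils down to applying the camera transform to the world position only, then adding the per-vertex sprite offset afterwards in screen space. a minimal javascript sketch, assuming a 2:1 iso projection stands in for the real camera matrix (all names hypothetical):

```javascript
// hypothetical 2:1 iso projection, standing in for the camera matrix
const TILE_W = 32, TILE_H = 16;
function project([x, y, z]) {
  return [(x - y) * TILE_W / 2, (x + y) * TILE_H / 2 - z * TILE_H];
}

// the split: the tile's world position goes through the camera transform,
// but the per-vertex pixel displacement is applied after, in screen space,
// so rotating the camera moves the sprite without rotating its shape
function vertexPosition(worldPos, pixelOffset, sizeMul) {
  const [sx, sy] = project(worldPos);
  return [sx + pixelOffset[0] * sizeMul, sy + pixelOffset[1] * sizeMul];
}

// e.g. the top-left corner vertex of a sprite anchored at world (1, 0, 0)
console.log(vertexPosition([1, 0, 0], [-8, -16], 1)); // → [8, -8]
```

the "size multiplier" here scales the screen-space offset, never the world coordinate, which is exactly why it has to live in its own vertex attribute.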
anyway so now the shader looks identical to the old shader, but it does actual 3d calculations, which means i should be able to just change around the camera matrix uniform to rotate things (in discrete chunks of 90°, this isn't magic) or add in a 'focal depth' uniform to do clipping or cut-away renders for when the pc walks behind a wall or w/e else. so that'll all be neat! it also vaguely makes steps towards there being custom lighting, which is something i'd like to do at some point, but here i have a 32-color palette and so trying to respect that with lighting opens up a whole world of nightmarish post-processing paletted shaders that i really don't want to get into right now. suffice to say there are some options.