render buffers
so it's been a while
my two-week project schedule got totally wrecked by covid stuff, and since that stuff is still ongoing it's hard to tell to what degree i'll be able to get back on track with actually, you know, doing things
but eventually i got frustrated and started working on stuff again. i actually did an enormous amount of work on that really complicated combinatorics problem i've been struggling with for a while, and made some big breakthroughs, but it's not 100% complete so i'm holding off on posting about it until i can actually show it off.
anyway what i did over these past two weeks has been work on render buffers.
so when you want a program to draw anything to the screen, you have to give it 3d information: here are these vertices, and these three are connected into a triangle, and so forth. the vertices also tend to have data attached, for lighting or texturing or coloring. material data, that kind of thing. and it's not enough to just have that allocated in memory; due to all the optimizations video cards do, you need to feed this data to the video card in really specific ways. enter the render buffer. a render buffer represents a chunk of memory that's structured in such a way that it can be easily fed to a GPU or stored in video memory, then processed by a game's shaders and ultimately rendered on screen. haskell is a very 'high-level' language, but to do rendering effectively you need to kinda dig back down into the low-level stuff; this is in large part what libraries like gpipe are doing: giving a haskell gloss over some very low-level rendering operations.
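to make that concrete, here's a tiny sketch of what "vertex data with stuff attached" looks like, and the flattening step that turns it into the packed run of floats a GPU actually wants. the names here are hypothetical, not from gpipe or any particular library:

```haskell
-- a sketch of the kind of data a render buffer holds; illustrative names only
data Vertex = Vertex
  { vPosition :: (Float, Float, Float)  -- where the vertex sits in 3d space
  , vNormal   :: (Float, Float, Float)  -- surface direction, used for lighting
  , vUV       :: (Float, Float)         -- texture coordinates
  }

-- the GPU doesn't want records; it wants a flat, tightly-packed run of
-- floats in one fixed, agreed-upon layout. this flattening is the part
-- where the high-level representation meets the low-level one.
flatten :: [Vertex] -> [Float]
flatten = concatMap $ \(Vertex (x, y, z) (nx, ny, nz) (u, v)) ->
  [x, y, z, nx, ny, nz, u, v]

-- eight floats per vertex, so a triangle flattens to 24 floats
main :: IO ()
main = print (length (flatten [v, v, v]))
  where v = Vertex (0, 0, 0) (0, 1, 0) (0, 0)
```

once the data is flat like this, there's nothing in the buffer itself saying where one piece of geometry ends and the next begins, which is exactly the problem the rest of this post is about.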
when geometry changes in the world (like, by terrain deformation) you'd ideally want to be able to erase the old geometry and place the new geometry quickly, by writing to only a small portion of the render buffer. however, a render buffer doesn't really come with an index; all it has is raw vertex data. if you want to relate those vertices to some kind of game knowledge, you have to do that yourself. so it's much simpler to overwrite the entire buffer.
this is also made more complex by size constraints: if you have a flat surface that gets dug into, maybe the flat surface took up only 4 vertices, and the dug surface takes up 16. now you can't just blindly overwrite one with the other, since the replacement is bigger; you'll corrupt other geometry. if this is a game where the player can construct arbitrary geometry, either by placing blocks minecraft-style, or more indirectly by placing constructed objects, then you may end up having to handle render writes for arbitrarily-complex geometry. so it's much easier conceptually to just clear and regenerate everything.
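the index you'd have to maintain yourself can be sketched as a map from game objects to slots in the buffer, and the "does the replacement fit" check falls out of it. again, hypothetical names, just to illustrate the bookkeeping:

```haskell
import qualified Data.Map.Strict as Map

-- each piece of game geometry owns a slot: an offset into the buffer
-- plus a size in vertices
type Slot  = (Int, Int)
type Index = Map.Map String Slot

-- an in-place overwrite is only safe when the new geometry fits inside
-- the old slot; otherwise we'd scribble over a neighbour's vertices
canOverwrite :: String -> Int -> Index -> Bool
canOverwrite key newSize idx =
  case Map.lookup key idx of
    Just (_, oldSize) -> newSize <= oldSize
    Nothing           -> False

main :: IO ()
main = do
  let idx = Map.fromList [("flat hex", (0, 4)), ("cliff", (4, 16))]
  print (canOverwrite "flat hex" 16 idx)  -- the dug-out surface doesn't fit
  print (canOverwrite "cliff" 12 idx)     -- shrinking in place is fine
```

the painful cases are all the `False` results: those are where you have to either relocate the geometry elsewhere in the buffer or reallocate the whole thing.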
for example, one of the reasons minecraft can be so slow is that it stores geometry in 16x16x16 chunks, and any time any block in a chunk is changed it regenerates and rewrites that entire chunk's geometry to its render buffer. so if you have a bunch of complex blocks that all update rapidly, you end up with a chunk that needs to be regenerated nearly every frame. this is part of why you can have 'laggy chunks'.
so, there's a tension here with your render buffers: you want to write only small updates, as needed, but then you need a lot of bookkeeping math to keep things squared away, and ultimately you might still need to reallocate a larger buffer and copy everything over. conversely, you could allocate oversized buffers, which waste space, but that lets you get away with less bookkeeping and means you reallocate less often. there's no perfect solution, because there's a three-way tension between wasted space from too-large buffers, the cost of reallocating, and the cost of your own bookkeeping code.
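one common point on that trade-off curve, sketched below under the assumption of power-of-two slot sizes (this is an illustration of the general idea, not my actual scheme): round every slot up, eat the wasted space, and in exchange most edits reuse the slot they already have.

```haskell
-- round a vertex count up to the next power of two. wasteful, but it
-- means geometry can grow quite a bit before its slot must move.
padTo :: Int -> Int
padTo n = head [p | k <- [0 :: Int ..], let p = 2 ^ k, p >= n]

-- a 4-vertex flat hex gets a 4-slot; digging it into 13 vertices forces
-- a move to a 16-slot, but every later edit up to 16 vertices is free
main :: IO ()
main = mapM_ (print . padTo) [4, 13, 16]
```

the knob here is how aggressively you pad: bigger padding wastes more video memory but makes "does it fit in place" come up `True` more often.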
what i wanted was just, buffers that for the most part automatically Work when i push render data into them, without me needing to spend a lot of time manually calculating index offsets and checking buffer limits. when i started working on haskell rendering again, i kind of decided that i was done with doing anything low-level; what i wanted was to make something that properly encapsulated the actual low-level stuff in an abstraction that actually works.
i got that working, is the takeaway. i did some sync stuff and some threading stuff and some indirection, and a bunch of low-level index management, and now for the most part i can automatically generate and update render buffers piecemeal in an efficient way while externally having a pretty idiomatic haskell interface. that's neat!
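to give a feel for what "idiomatic haskell interface" means here, this is a toy in-memory sketch of the shape of api i'm describing: push geometry, get an opaque handle back, and update or delete through the handle while the library does the offset math. all of these names are illustrative; this is not my actual module, and a real version would be backed by GPU buffers rather than a `Map`.

```haskell
import Data.IORef
import qualified Data.Map.Strict as Map

-- callers only ever see opaque handles, never raw offsets
newtype Handle = Handle Int deriving (Eq, Ord, Show)

data Buffer = Buffer
  { bufNext  :: Int                     -- next free handle id
  , bufSlots :: Map.Map Handle [Float]  -- handle -> its vertex data
  }

newBuffer :: IO (IORef Buffer)
newBuffer = newIORef (Buffer 0 Map.empty)

-- push some flattened geometry and get a handle back
push :: IORef Buffer -> [Float] -> IO Handle
push ref verts = atomicModifyIORef' ref $ \(Buffer n slots) ->
  let h = Handle n
  in (Buffer (n + 1) (Map.insert h verts slots), h)

-- replace one piece of geometry without touching any other
update :: IORef Buffer -> Handle -> [Float] -> IO ()
update ref h verts = modifyIORef' ref $ \b ->
  b { bufSlots = Map.insert h verts (bufSlots b) }

delete :: IORef Buffer -> Handle -> IO ()
delete ref h = modifyIORef' ref $ \b ->
  b { bufSlots = Map.delete h (bufSlots b) }

main :: IO ()
main = do
  ref <- newBuffer
  h   <- push ref [0, 0, 0]
  update ref h [1, 1, 1]
  Buffer _ slots <- readIORef ref
  print (Map.lookup h slots)
```

the point of the handle indirection is that all the slot-fitting, padding, and reallocation from earlier can happen behind `push` and `update` without the caller ever knowing geometry moved.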
also to test that i added some new kinds of world geometry to the maps. previously all tiles were smooth slopes with hard edges, and i changed that so that now hexes can be split in two or in three or into different kinds of steps, which (once i actually manage to work it into the generation/smoothing step) can add a lot of visual variety to the landscapes. obviously nowhere near as much as i'd like, but way more than was previously possible.
anyway this basically lets me shove in arbitrarily-complex landscape and tile occupant geometry without having to do any counting or worry about allocations or data corruption. this will make adding new render stuff (currently: plants, rocks) a lot easier.