- Aug. 13th, 2019
i ended up writing a todo list when i started and took some degree of day-by-day notes for this project, so i figured i might as well put together a writeup.
- Aug. 9th, 2019
so, i started doing two-week projects again.
this time they're more... this is, i guess, explicitly gamedev work for an actual game? instead of random noodling. so the list of prospective things to work on is like, "picking", "map generation", "interface", "timing and animations", "more efficient rendering", which will hopefully over the course of the next, uh, year-or-so, shade into actual gameplay and mechanics stuff rather than just mostly engine stuff.
this two week project was picking
screenshots on monsterpit:
screenshots on tumblr:
if you're not up to date on your programming lingo, 'picking' is the term for locating stuff under your cursor. so like, being able to click on something in a 3d environment. picking for ui is very simple, since like... you can get the pointer cursor position, and do a super-simple 2d collision check vs. whatever rectangles or circles provide interaction. simple. doing it in 3d is more complex, and it's in two parts: one is to figure out what in the environment the cursor ray hits, but before that you have to figure out what the cursor ray even is. i'd never really tried doing picking in its most general form before, because it involves a lot of confusing matrix math, so it was something really intimidating, and i was kind of resigning myself to two weeks of struggling with math and concepts before eventually producing a super buggy and half-working prototype.
so it was kind of a surprise when two days in i had written the unprojection code and could reliably hit a target on a grid.
the full pipeline here is: take the cursor position. turn that into a 4d vector in clip space, as a point on the camera near plane. the clipping frustum is always a cube, so you can calculate that pretty trivially.¹ it's also very easy to get the exit point of the ray: that's just the same point but on the far clipping plane. then you need to 'unproject' the point by multiplying it by the inverse of your transformation matrix. this performs the inverse of normal rendering: instead of transforming coordinates in world space into coordinates in opengl clip space, it transforms coordinates in opengl clip space into coordinates in world space. after you've done all that you have two points that represent 1. the point in world space that's closest to the camera and directly under the cursor, and 2. the point in world space that's furthest from the camera and directly under the cursor. assuming your space isn't curved (which it won't be unless you're doing super weird spatial geometries), if you subtract one from the other you get a starting point and a ray, which you can use to shoot through the world environment to see what the cursor could possibly be over.
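here's a minimal sketch of that whole pipeline using the linear library -- the function, its arguments, and the assumption that you have a combined view/projection matrix on hand are all mine, not the actual project's code:

```haskell
import Linear

-- minimal unprojection sketch: window-space cursor -> world-space ray
cursorRay :: M44 Float   -- combined projection * view matrix
          -> V2 Float    -- viewport size in pixels
          -> V2 Float    -- cursor position in pixels
          -> (V3 Float, V3 Float)  -- (ray origin, ray direction)
cursorRay viewProj (V2 w h) (V2 mx my) =
  let -- window coords -> normalized device coords in [-1, 1]
      ndcX  = 2 * mx / w - 1
      ndcY  = 1 - 2 * my / h  -- window y runs down, ndc y runs up
      invVP = inv44 viewProj
      -- the same cursor point on the near (z = -1) and far (z = 1)
      -- planes, unprojected; normalizePoint is the divide-by-w step
      near  = normalizePoint (invVP !* V4 ndcX ndcY (-1) 1)
      far   = normalizePoint (invVP !* V4 ndcX ndcY 1 1)
  in (near, normalize (far - near))
```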
i was kind of vaguely aware that that was possible, but it wasn't anything i'd ever come close to doing by myself before.
i ended up reading this tutorial and the opengl documentation for unproject several times.
things that stumped me for a while:
- the clipping frustum is centered on 0,0,0 and goes from -1 to 1. notably, this means that the near clipping plane is at -1, not at 0. `gluUnProject` takes z values from 0 through 1, but the math shown in its documentation transforms them to the range -1 through 1.
- 4d coordinates are 4d. one of the biggest issues i had was that my screen position and ray values seemed to be scaled weirdly, and i ended up spending several days tweaking a weird magical scaling factor that i didn't understand before ultimately realizing that i wasn't normalizing the vectors by dividing by w. after i did that all the math became very simple.²
- 3d perspective accounts for a lot. i was rendering the raycast output as projected onto the grid for a while, and the output always looked super weird and wrong until i started rendering it as the actual hex prism volume hit for each hex that the ray passed through, at which point i realized it had been right all along.
once i had the ray it was a fairly simple job to cast it through the grid, although that brought with it the usual imprecision issues that i still haven't totally fixed. something like bresenham's is nice because it's a discrete, integer-stepping algorithm; when you're doing fully general geometric collision through a grid there's always the potential to slip through some vertices due to floating-point imprecision, and that'll mess up the entire cast. that's one of the remaining problems, although uh it looks like i have a whole six days to resolve the final few bugs. hopefully this won't be one of those "half the time spent working on 90% of the problem and the other half of the time working on the other 90% of the problem" situations.
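for reference, this is the shape of the grid walk on a plain square grid (the hex version is the same idea with messier neighbor math); a from-scratch sketch, not the project's code, and the boundary comparisons in `go` are exactly where the slipping-through-vertices imprecision lives:

```haskell
-- walk a ray cell-by-cell through a unit square grid, always stepping
-- across whichever cell boundary the ray crosses first
raycastCells :: (Float, Float)  -- ray origin
             -> (Float, Float)  -- ray direction
             -> Int             -- how many cells to visit
             -> [(Int, Int)]
raycastCells (ox, oy) (dx, dy) n = take n (go (floor ox, floor oy) tx0 ty0)
  where
    stepX = if dx > 0 then 1 else -1
    stepY = if dy > 0 then 1 else -1
    -- ray-parameter distance between successive x (resp. y) boundaries
    tdx = abs (1 / dx)
    tdy = abs (1 / dy)
    frac v = v - fromIntegral (floor v :: Int)
    tx0 = (if dx > 0 then 1 - frac ox else frac ox) * tdx
    ty0 = (if dy > 0 then 1 - frac oy else frac oy) * tdy
    go (cx, cy) tx ty
      | tx < ty   = (cx, cy) : go (cx + stepX, cy) (tx + tdx) ty
      | otherwise = (cx, cy) : go (cx, cy + stepY) tx (ty + tdy)
```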
more generally, i'm hoping these two-week projects help... focus me? like this time i guess i really am committing to working on an actual Game Project, instead of dozens of different libraries and diversions, and i could see how that might amp up my natural tendency to get frustrated. what i found during the last year of projects was that two weeks is just about exactly enough time for me to get frustrated and annoyed with something before i run out of time and have scheduled permission to stop thinking about it entirely, without caring that much if i didn't get much accomplished. that helped a lot. i frequently have a big problem of constantly obsessing over a project, where the act of perpetually worrying exhausts all the energy i have to work on it in the first place, so i end up constantly worrying about something but never actually making progress.
but the other half of it, which i'm becoming more conscious of now, is that in addition to giving me an automatic escape hatch for when i get bored, these projects also focus me on a specific thing. with other game projects i've seen the vast spread of Stuff To Do and tried picking away at the margins slowly over months and made no real progress. but here i went from "oh there's so much to do and there are so many different categories of stuff i need to work on" to "oh, i could make a two-week project out of each thing". picking is something that has loomed over me as a Complex Math Thing Needed For Games That I Have No Clue How To Do Properly for years, and here i was just like, well, i'll work on it for two weeks and see how it goes. and it turns out that was something i could make big headway on in two weeks -- it was a thing i made big headway on in two days. but it never would've happened if i was trying to do picking and ui and better shaders and timing and animation and game mechanics all at once.
so i guess we'll see how i feel about all of this in another year, when i may or may not have a list of 24 items i worked on.
also i'm technically posting this a week early but that's mostly because uh i'm hoping the rest of the picking stuff is just mopping up some of the raycasting glitches and hex prism collision stuff, which will probably be pretty boring since that's all ground i've covered before. i may or may not write a follow-up post on the 14th/15th when it's actually 'due'.
- ¹ i got very used to thinking about the camera as a point, when this made me realize that of course it's not just a point; the camera is basically the quad of the near clipping plane. a lot of my thinking was predicated on "well, all the rays are shot out of the camera, so the starting position of the ray should always be the same unless the camera moves", when of course that's wrong: the position can be any point on the near clipping plane. sure, you could say the camera is a point and the near clipping plane is in front of it, but that's not really... useful. ↑
- ² when you have a 4x4 camera matrix, the fourth coord is called `w`, and there are a bunch of talks about what that weird fourth-dimensional value actually means. "it should be 1 for points and 0 for vectors" is generally true, but it wasn't until i did all this math that i really understood why that was true. all of this matrix math is to simulate the perspective effects of cameras, right? which includes the thing that we take very much for granted, which is "things that are further away are smaller". in the matrix math, this is done by literally scaling things down when they're far away. similarly, since the clipping frustum is always a cube, if you want anything other than a 45-degree view angle the perspective matrix also stretches out everything horizontally to give a 'correct' field of view. these are transforms encoded in the camera/perspective matrix that wildly squish and squash things around to mimic the perspective effects of 3d space. here, what `w` encodes is the degree of perspective scaling: a point will have a large `w` if it's very close to the camera, and it'll have a small `w` if it's very far from the camera. those points will be in something like clipping space, which is usually not what you want, so "points should have a `w` of 1" is just telling you that if you have a point in clip space you can normalize it with its `w` to get a point in world space, which is usually what you will want. ↑
- Jun. 13th, 2019
2 week project
kind of been unsteady on these for a while. the last one (may 15 to may 31) was kind of a haphazard mess. i implemented some more operations for my polyhedra data model format, which notably included "loft", which turns out to be very close to a general-purpose extrude. i used that to make some simple "tree" shapes:
* trees 1
* trees 2
* trees 3
but ultimately i got kind of frustrated with the super simplistic shapes and kinda stopped working on that. (ideally i'd get some more structured face splitting code working so that i could e.g., split trunks and branches at arbitrary angles, so that i could hook it up to some kinda l-system and generate continuous models)
so that was late may. early june i decided to focus on something else and i worked on wang tiles. the end result of that is this page (which apparently doesn't work in chrome but look i've tested it in firefox and midori so like, w/e sorry you use a bad browser) which consists of two parts: one is a totally unconstrained tiling that just does some grid math to make a perfect tiling, and the other is a 'nested' tileset that uses a backtracking solver that i wrote and is moderately decent at figuring out how to lay out some tiles. i was intending to get to manual specification of tiles, since that could use the same solver; it's just UI to handle it, but uhhh html/javascript ui is kind of a mess to write and i'm not good at it, so, oh well i'm not gonna get around to that right now.
the main thing that i learned from this is that a naive backtracking solver is just, total garbage. my first attempt at a solver was really simple: for each tile, pick a random tile from the tileset, but constrained so that its edges match any adjacent tiles that have already been placed. if there aren't any tiles that match, reject the last tile placed and pick a new one. that algorithm worked sometimes, but it got stuck very easily and frequently got into situations where it would have to exhaust its way through dozens of tiles (which would take thousands of backtracking steps) to get to the problem tile. i considered a few options, and inspired by the WFC constraint algorithm i decided to try precaching options: when a tile was chosen, it would be checked against all its adjacent unfixed tiles; if it led to a situation where no tile was possible, it would get rejected, and all those partially-constrained tilesets were cached for later, for when those tiles would get resolved. this made it much more likely to find a solution, though there were still a bunch of cases where it'd get stuck and fail. if i was gonna keep messing with it, i think the next thing i'd try would be having it unplace the adjacent tile with the most options available, or something like that, instead of unplacing the last tile (since frequently that's not a tile that's actually adjacent to the problem). i don't really know if that would meaningfully change anything, but, doing that is how we find out.
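for comparison, the naive solver really is only a few lines -- here's a from-scratch sketch (my own tile type, with the list monad doing exhaustive reject-and-retry in tileset order rather than the random picking described above):

```haskell
-- a wang tile: one edge color per side
data Tile = Tile { north, east, south, west :: Int } deriving (Eq, Show)

-- does a candidate agree with the already-placed tiles above and to its
-- left? tiles are placed left-to-right, top-to-bottom on a w-wide grid
fits :: Int -> [Tile] -> Tile -> Bool
fits w placed t = leftOk && upOk
  where
    i      = length placed
    leftOk = i `mod` w == 0 || east (placed !! (i - 1)) == west t
    upOk   = i < w          || south (placed !! (i - w)) == north t

-- every valid w*h tiling; the list monad supplies the backtracking
solve :: Int -> Int -> [Tile] -> [[Tile]]
solve w h tileset = go []
  where
    go placed
      | length placed == w * h = [placed]
      | otherwise = do
          t <- filter (fits w placed) tileset
          go (placed ++ [t])
```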
(wang tiles can be used for map generation, as in this video, but i'm more familiar with using them as a tileset placing thing. if e.g., you have a bunch of 16x16 pixel squares that are, like, cave walls, and some have vines running down and some have waterfalls running down and there are various different kinds of rock shapes, it's possible to assign them wang colorings (e.g., green edge if a vine runs off, blue edge if water runs off, grey edge otherwise) and then use a tile placement solver to generate a random placement of the tiles in a way that maintains coherency. you see this a lot in like, children's matching games where you have a puzzle and you want to match up all the edges correctly.)
- May. 14th, 2019
okay two week projects
i uh really intended to do more for this two week project but these days i am constantly exhausted by the nightmare that is the ongoing end of the world and also was actually physically sick for about half of the timespan of the two-week project sooooo i didn't get much done
mostly what i did was go back to the winged-edge polyhedra format and tweaked it a little. i wrote some code for directly rendering a polyhedra to gl buffers, without taking an intermediate step turning it into turtle drawing instructions, since that seemed... very unnecessary. i also put together some polyhedra house models. just the same kinda stuff i've done before: flat roofs, skillion roofs, gabled roofs, etc. or okay the full list is: flat, skillion/mono-pitched, gable, saltbox, butterfly, hipped, square hipped, sprocket, gambrel, double-gabled, cross gable, and rhombic roof types.
current rendering is complicated somewhat by how some of those contain concave faces, which i can't render correctly yet. there are apparently decent algorithms for triangulating arbitrary polygons but i haven't yet figured out how to implement any of that, so, oh well.
the goal is to get to the point where i can do arbitrary CSG actions and construct much more complex shapes by stacking a bunch of simple house outline objects together and adding decal elements (railings and the like). basically redoing some of the stuff i did with those svg houses, only now with a better model format. but it turns out arbitrary CSG stuff is, you know, hard.
first house renders
second house renders
third house renders
that's all for now. next up: graph embeddings on polyhedra, i guess.
- Apr. 15th, 2019
graph embedding
two week projectssss
these past few months have been kind of erratic wrt actually working on a concrete project. the two weeks before last (mar 16-31, i mean) i didn't actually do anything. kind of a mess.
but these two weeks (april 1-15) i did do something! specifically i got an embedding using loose edges working. previously i had gotten a very basic render working, and used it to draw some 'circuit' dungeons, but i never got the embedding or the expansions working, which meant that i couldn't actually generate a dungeon yet, i could just generate a template. these two weeks (and a little in late february because like i said i haven't been cleanly demarcating these things that well) i wrote an embedding that, for the most part, actually works.
here are some screenshots before i get to the part where i explain what still doesn't work:
- first embed test
- less buggy; no collisions
- room/room collisions
- denser graphs
- a false confidence that the code had all its bugs worked out
- first generations after monad generalization
- after big edge collision rewrite
- final edge generation
so there was a lot. most visibly, now there are hallways! that's a big step. what this means generally is that edges are now treated more-or-less the same as nodes, rather than being mostly ignored -- previously, edges weren't treated as things that could have collision, or store data, or have their changes be provided to the embedding, or anything like that. this also radically changed the way the hex embedding works, since in order to properly fix edges and nodes in place i needed to change the way it processed all the changes a lot.
but also while in the process of doing all that, i generalized the graph grammar code so that it wasn't always running in random. now it can run in any arbitrary monad. that's neat because it radically expands the ways in which this code can be used -- for example, it would be possible to run it through `IO` to get an interactive graph expander. this also means eventually i'll be able to pull out the `acc` type variable (the graph accumulator state) and just use some kinda state monad, which will help simplify the types a little. a graph grammar's type is currently `GraphGrammar m p q acc gr n e`, and like, anything i can do to remove type variables from that is great.
but what that, and also some of the extensive debugging i did for the edge code, revealed to me is that this code is just lousy with strictness problems. in my mind, the code was all very simple and elegant: when looking for an expansion, the graph generator is capable of generating a lazy list of all possible expansions, which it tries one by one until it gets a match. then the rest of the (very long) list gets discarded and never has to be evaluated. as a part of that, too, we generate a graph embedding for every possible expansion, and go through the list until we find one that works, and then we discard and ignore the rest. if those are actually lazy lists, it's fine. it's a nice clean way of using haskell's laziness to our advantage.
but as you might be getting an inkling, that's not how things actually go. these lists of expansions are sequenced, which means we turn `[m a]` into `m [a]`. and because all of the items in the list might be running code in the monad, we have to evaluate the entire list. this means that this code needs to fully evaluate every possible expansion and embedding that's considered, even after the code has successfully found a match, because it's possible for the unpicked expansions to change the monad state (here it's the random seed, but it could be anything). this is a big problem and it means the code is doing like, 10x, 100x as much work as it really needs to, and probably goes a long way towards explaining why it's so slow. so. that's not good. but it is nice to know about, because this is a huge place to try some kinda optimization, at which point this code will probably be, uhhh, way way faster. i just gotta figure out some limitations that would make that change possible. we'll see how that goes.
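the problem in miniature, as a from-scratch sketch: in a monad that actually threads state (a random seed, say), the sequence-based version runs every action in the list, while explicit recursion stops as soon as something matches:

```haskell
import Data.List (find)

-- runs *every* action in the list, even after a match exists
firstMatchStrict :: Monad m => (a -> Bool) -> [m a] -> m (Maybe a)
firstMatchStrict p actions = do
  xs <- sequence actions
  return (find p xs)

-- only runs actions until one matches, then stops
firstMatchLazy :: Monad m => (a -> Bool) -> [m a] -> m (Maybe a)
firstMatchLazy _ [] = return Nothing
firstMatchLazy p (act : rest) = do
  x <- act
  if p x then return (Just x) else firstMatchLazy p rest
```

(the second version does change which effects run at all, which is presumably the 'limitations' part to figure out.)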
in the mean time, the loose edges thing has definitely fixed one of the major problems of the generator. it's possible to make a dungeon with a skeletal 'starting' graph, that correctly positions various important rooms, and then goes on to subdivide without crushing everything together and perpetually making the level generation a big tree with no real interconnections. this can be really useful, though i'm only using it in the most basic ways so far. one of the things that this enables is proper graph nesting: making a big 'world map' graph, where each node in that graph is then expanded out using its own custom generator for its region type (e.g., city, cave, forest, etc), since i could write a skeleton-graph generator that places all the important outgoing connections to other zones in the right places and then trusts the graph expansions to turn that basic web into something a lot more complex. so that's very enthusing.
that being said, the embedder that i wrote still has a lot of limitations, and now even though my expansion code can handle a lot more contextful expansions, the embedder can still only place nodes very simply -- if you have a new node that's connected to three or more already-fixed nodes, then it'll always fail to embed that node. that's a big issue for things like lakes, rivers, or coastlines, that need to have structural edges in addition to their 'walking' edges to keep the graph layout coherent. that's probably the next big step to take, before i can demo out actual 'world maps' that e.g., start with lakes and rivers and canyons and coastlines and coherently keep water routes flowing all through them. that'll be neat to do, though.
definitely not for the next two-week project though; i'm really burnt out on graphs right now.
- Mar. 14th, 2019
okay so, graph code
my two-week projects have been kind of erratic recently, or maybe i've been biting off increasingly more as i get more and more settled into doing them (i mean i have been doing them for more than a year now, which, is a while). lots of things that aren't really finished.
anyway this time i wanted to improve my graph generator by adding 'loose' or 'stretchy' edges
so in the abstract, a graph grammar system is incredibly flexible and very powerful. however, my implementation of graph grammars, and the specific expansions and embeddings that i'm using, are really quite limited, and they hamstring the system a bunch and limit me to only generating a pretty constrained subset of what's actually possible.
the biggest stumbling block i've been having is the case of distant nodes. there are three parts to this: one, the embedding code needs to generate an embedding as the graph is expanded, rather than all at once at the end -- this is so impossible graphs are rejected instantly when they happen, rather than maybe staying around for a lot of further calculation before needing to be rejected; two, my embedding never moves already-placed nodes around, it only places new nodes -- this is because moving nodes around is very difficult to do algorithmically and would require a lot more code to do physically; and three, in this embedding all nodes with an edge between them need to be adjacent.
what this means is that if a new dungeon graph is started with a start and an end room, then those rooms need to be adjacent, and no further graph expansion can push them apart. expansions might sever the link between the start and end rooms, for example to place a hallway between them, but that would just add a 'hallway' room that runs along the two rooms, not a hallway that pushes between them to separate them.
(this also comes up if/when i get graph embedding on a larger graph working -- when i was working on polyhedra, it was pretty trivial to make a graph out of a polyhedra by assigning a node to each face and an edge to each edge, and then it could be possible to embed e.g., a world map graph onto an arbitrary polyhedra. but that would have weird coverage (like the world map only spanning half of the actual world shape, or w/e) unless it was possible to kind of knit a 'minimal' graph across the entire surface, say by generating a subdivided polyhedra and then using the original, non-subdivided polyhedra as a graph overlay, and then expand from there. but that would generally require having nodes in the knitted graph that have edges between them, despite not representing faces that are edge-adjacent on the actual subdivided polyhedra.)
so what i really wanted to do was add some form of 'loose' edge, that denotes connectivity but that doesn't imply that the shapes need to be directly adjacent. then i could generate a dungeon that's got a start-to-end initial graph with a big long loose edge between the rooms, and so then graph expansion could start by cutting into that edge.
(this also also comes up when thinking about graph subdivision -- generating one big 'world map' and then having each node in that graph generate its own area graph. if the world graph spits out, say, that the forest is connected to the town and the canyon and the river, it's difficult to figure out how to generate a starting graph that correctly places the respective exits to the town, canyon, and river areas while still retaining enough free space for the deck of 'forest' graph expansions to do something interesting. but if it's possible to just place those nodes correctly to start and hook them all to some central room with loose edges, then the graph is still very minimal while still having the placement guarantees that are needed.)
it turns out for dungeon graphs, what a 'loose' edge represents is pretty clear already: it's a hallway, and an expansion that happens along that edge can just place the new room somewhere where it overlaps with the hallway, and then cut the hallway into two pieces.
so this two week project was about doing that. i did... part of it?
partly the thing is that since haskell lets you divorce code so thoroughly, this was really about changing like three separate libraries:

- `Data.GraphGrammar` itself needs to start caring about edge data, and it needs to do things like e.g., provide a lens for embedding edge shape data into the edge type, in addition to providing a lens for embedding node shape data into the node type. it also needs to start tracking new edges, and providing them to the embedding function in the same way it tracks new nodes -- basically it's been treating edges as unimportant, second-rate values for a while since i haven't ever actually been using edge data for anything
- `Shape.HexEmbedding` needs to figure out how to do something with that new edge data. this means a new type (`LooseEdge Hex`, specifically) for use as edge data, plus new hex collision code in `Shape.Hex` that can handle the relevant shapes. my currently-existing collision code is incredibly slow in all cases save for hex/hex collisions, so i need to figure out, at the very least, good hex/line and line/line collisions
- `Generator.DungeonGen` needs a new generator that actually uses these new types to design new kinds of dungeons that use loose edges

also, when i actually render these things, `graphToSVG` needs to understand what the new edge data type means and how it can be rendered correctly.
so i ended up doing the first item entirely, which is just abstract data-shuffling, and parts of the second and third items -- i have some new collision code written, but not enough to actually use, and i can successfully generate an initial graph for a circuit-based dungeon (or, theoretically, a linear start-to-end dungeon). i also figured out the rendering for hallways, which is neat because these are officially the first concave shapes that this setup can render!
i have some in-progress screenshots of what those graph outputs look like:
- Mar. 1st, 2019
uniform-tagged shaders with gpipe
okay, two week projects
i didn't do anything for february 1-14 since i was sick for that entire stretch (i was sick for like three weeks and it sucked), but i did get some stuff done for the 15-28 period.
gpipe has this enormous infrastructure for type-safe shaders, so that you can never write the wrong kind of vertex pipeline to a shader, or write the wrong thing to a buffer generally, and all of its weird internal types are instanced to a mess of typeclasses so that they get marshaled correctly automatically. it's real nice. but what they don't do for you is handle uniforms. or rather, they handle uniforms (you can write to them and use them just fine) but they don't have any kind of construction for "this shader uses uniforms of type x y and z and so before you run the shader you have to provide x y and z values". basically they don't really surface the shader dependency on uniforms into the type system at all; a shader is just a shader and if you need to write uniforms for it then you need to manage that yourself.
in my existing code, the only uniform i really used was the camera, which was updated once at the start of each frame, and while i had lots of ideas for useful shaders i could write, i basically had no way to store or structure them. my code ran using a list of `RenderUpdate os` values (where `os` is a phantom type value that represents which rendering context the render event came from), and what that means is that if i wanted to keep that same basic infrastructure of having a render cache full of render actions, i would need to hide all the shader information, since haskell doesn't have heterogeneous lists -- i couldn't have a list of like, `RenderUpdate os ShaderFoo` and `RenderUpdate os ShaderBar`.

so already i was thinking this would have to be an existentially-quantified thing: have something like

```haskell
data RenderObject os = forall s. RenderObject
  { shader :: CompiledShader os s
  , chunks :: [RenderChunk os s]
  }
```

so that the shader could vary based on the type of the chunks, but that would be hidden inside the type so that the overall type is still just `RenderObject os`, so i could continue using a simple list of render updates. additionally, i'd need some way to attach uniforms to shaders, which would be an entire other step that would need to be done in a similar fashion -- stuff the uniform type in an existential somewhere, so that i could automatically set them in some fashion.
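for what it's worth, the reason the existential works: pattern matching brings the hidden `s` back into scope, so each object can run its own shader over its own chunks while the list type stays uniform. a sketch, assuming gpipe's `CompiledShader os s` synonym (an `s -> Render os ()` function) and a hypothetical `chunkEnv` accessor:

```haskell
-- render every object in the cache, whatever shader each one uses
drawAll :: [RenderObject os] -> Render os ()
drawAll = mapM_ $ \(RenderObject shdr cs) ->
  -- inside this match the existential s is in scope, so the chunk
  -- environments line up with the shader that consumes them
  mapM_ (shdr . chunkEnv) cs
```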
(i should also say that i put off working on this for a while since i only had a hazy idea of how this would work, and i figured it would be really complicated and involve a lot of weird type stuff.)
so with all that being said let's talk about the code i wrote.
( lots of code and code talk under the cut )
- Feb. 2nd, 2019
i am hopefully gonna make a longer post about this later but
this latest two-week project was about finally getting forms (and more generally, a higher-level i/o event framework) working in haskell. this involved grappling with the reactive-banana FRP framework i'm using, and figuring out how that combines with gpipe -- they both need to be run in an IO monad wrapper, so they can both lift pure IO into their own wrapper but they can't lift each other, which adds some wrinkles as to when which code can run -- and how to structure these things overall so that the handlers i write automate away the most tedious parts of event management (mainly individually checking click volumes) while still being flexible enough to do useful things
basically this entire two weeks was spent trying to get this framework to the point where i can hit tab to open a menu and hit tab again to close it. it's like 90% of the way there (closing a ui element doesn't wipe its used render indices, so it remains floating, uninteractable, on screen) so i'm calling it a success.
- Jan. 14th, 2019
2week project reportback
so what i worked on this month was mostly polyhedra (again), this time with an emphasis on making more complex shapes from a bare polyhedra
the first thing i did was get rectification working. it turns out that i had already implemented two general polyhedra operations: duals and kleetopes, and when you add rectification to that you can perform a pretty wide variety of polyhedra operations, to the point where the original conway polyhedra notation is built using only four operations: duals, kleetopes ('kis'), rectification ('ambo'), and 'gyro'.
i already kinda mentioned that part in previous posts.
so in the past two weeks:
- i got rectification working
- i started manufacturing rectified polyhedra
- reimplemented height-formation on a polyhedra (1 2 3)
- added procedural palettes (1 2 3 4)
- added translucent water layers
- added tile decor (wip)
so that's pretty nice, though a lot of that was easier because i'd already muddled through it on flat maps.
one of the big remaining issues is figuring out generation on polyhedra -- it's easy enough to extract the connectivity graph, but coordinates are tricky to figure out, and without them it's difficult to figure out direction or shape or outlines etc, which are all needed for even the most basic generation code (unless i want to go with pure noise, which, i don't)
i'll probably make a longer post sometime soon about what i want to do with this going forward
- Jan. 1st, 2019
oh right two week projects
this last one was polyhedra. as you probably noticed.
all my prior 3d stuff was in the form of triangle soup -- just a bunch of unstructured polygons with no connections between them. that's fine for relatively simple things, but i've been wanting to get more into csg for a while now, and that's something that it's very difficult to do with just a bunch of polygons (difficult to tell what the 'inside' of a polyhedra would be, etc)
also i wanted to generate more complex polyhedra, specifically, and to do that you need to know more information about the overall structure of the polyhedra -- which edges connect to this vertex; what are the adjacent faces to this face; that kind of thing. so that led me down the path to the winged-edge model format. it took me uhhhh a while to figure out how to think about models in this format, but i think i have it more-or-less down at this point.
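(for anyone unfamiliar, the core of the winged-edge format is an edge record that knows its endpoints, its two faces, and its neighboring edges -- something shaped like this sketch, with field names that are mine and not the actual library's:)

```haskell
newtype VertexId = VertexId Int
newtype FaceId   = FaceId Int
newtype EdgeId   = EdgeId Int

-- each edge is a 'wing': two endpoints, the faces on either side, and
-- the previous/next edges walking around each of those faces
data WingedEdge = WingedEdge
  { startV, endV         :: VertexId
  , leftFace, rightFace  :: FaceId
  , leftPred, leftSucc   :: EdgeId
  , rightPred, rightSucc :: EdgeId
  }
```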
the eventual goal is to use shapes in this format plus l-systems to synthesize solid models for stuff i want to render (plants, houses, the usual), but for now i'm mostly just, uh, rendering polyhedra. i still need to implement a lot more operations before i can robustly connect these shapes, but maybe not as many as i thought before. i ended up implementing duals and kleetopes, and i wanted to implement rectification, and when i found that chart i realized that just from duals and kleetopes i can truncate arbitrary polyhedra, and all i'd need to get rectification working is to write 'join', which i'd already figured out how to do from trying to get a cube subdivided into a rhombic dodecahedron (you take the kleetope of the cube and then cut every original edge, which performs the join operation. the main issue with this is figuring out the precise heights of the new points so that all the faces remain planar)
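in conway-operator terms those derivations come out as one-liner compositions -- a sketch with stand-in names, since i don't know what the real operations are called in the codebase:

```haskell
data Polyhedron  -- stand-in for the winged-edge polyhedra type

dual, kis, join' :: Polyhedron -> Polyhedron
dual  = undefined  -- swap faces and vertices
kis   = undefined  -- raise a pyramid on every face (the kleetope)
join' = undefined  -- the edge-cutting operation described above

-- truncation as kis-of-the-dual, dualized (t = dkd)
truncate' :: Polyhedron -> Polyhedron
truncate' = dual . kis . dual

-- rectification ('ambo') as the dual of join (a = dj), which is why
-- writing join is enough to get rectification
ambo :: Polyhedron -> Polyhedron
ambo = dual . join'
```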
so all of this isn't super visually complex yet but it's a decent foundation for finally rendering stuff that's not just prisms and prismoids.
what i also found interesting was that when i was doing my landscape rendering and did the whole half-offset hexagonal grid thing (boy, that link's gonna rot once i delete that tumblr too), there's actually an analogous operation on a polyhedra. i can make big polyhedra with lots of hexagonal faces (and some number of pentagonal faces) but i kind of wanted them to be offset in the same way, so that there were lots of straight lines along the grid. turns out it's literally the same operation, rectification, and it's not entirely out of the question that i could implement that. so that's nice.
the other thing i wanted to make big complex polyhedra with a solid structure for was to see if i could extract their face data as a graph and use that to run my landscape generation graph algorithm on them. i think... the way it would have to work would be very different on a planetoid vs. just slapped onto an infinite plane, but it's still potentially doable. i really need to update my graph generator setup before i do anything with it, since it's always been a little too simple and poorly-designed to do complex generations in any kind of time.
- Dec. 15th, 2018
oh right two week projects; i'm still kind of doing those after nanowrimo
on like the first and second i was like, okay sure, do a little rendering stuff, get some better shape rendering stuff, maybe world generation stuff like cacti, etc.
thennnn tumblr announced its porn ban
and like i haven't really been using tumblr for a while now, so that doesn't really directly affect me (aside from obliterating all the porn artists i was following) but it got me thinking about actually permanently moving my content somewhere, and centralizing it instead of having a bunch of different sites that all get a different sliver of my work. i was like, hmm, but making an entire CMS/blog/art site framework is actually a big challenge? but then i remembered that i had already done it years and years ago.
so i used to write php and way back in the day (for nanowrimo 2009, actually) i wrote a little database-backed web thing where you posted stuff into a text field and it stored the text and the wordcount and generated graphs for words written each day and average words written per day and all sorts of stuff. and then it ended up as kind of a fic WIP repository. and eventually (2013 and 2014) i was like "php is pretty terrible and i should rewrite it in something else", and dug up happstack and got parts of it working in haskell.
and so over the past two weeks i've been working on that and getting it into shape. mostly just getting it to compile, because i updated it to no longer be running on libraries from 2013, but then also i added in rss feeds for tags and individual posts, and hacked in some more complex tag structures (previously it was set up with danbooru-style tag implications, and i hacked that up to work more like file directories) and i looked at what would need to happen for it to support ActivityPub (the thing that makes mastodon / pleroma federate), and it turns out that's not super complex so i might add that in too.
it all runs, so i could theoretically dump it onto the tzaphiriron.sidemoon server and give a live demo, but 1. there're a lot of old fic wips that i don't actually want to release and 2. doing that would require upgrading the hell game webserver also, since the qliphoth site runs with a modern haskell platform, but the hell game site is built with a considerably more out-of-date one, and upgrading that would mean dumping the old half-finished hell game update out there.
we'll, uh, see how much work i want to put into hell game right this second in order to make qliphoth run.
also since i have no clue how to manage auth stuff, the qliphoth setup is currently nearly entirely unsecured. or rather, the auth setup is just a check to see if the ip is `127.0.0.1`, and if so it generates pages with permissions, and if not it doesn't. i'd like to get that at least a little more functioning before i host the server and release the code.
(one of the things i realized was that if i do activitypub federation i'm going to need actual permissions controls even if it's only ever set up as a single-user server, since ofc once people subscribe to feeds there's the question of what view permissions they'd have. so just making a server to only host one person's files with no social aspect doesn't actually rein in the permissions/social complexity much once you start to support any kind of activity stream.)
( some screenshots with nsfw text in them )
- Oct. 15th, 2018
okay so two week projects
i didn't write a writeup for the last one so this is two-in-one.
so for the latter half of september i tried out procedural magic systems, and ultimately i wrote down a lot of notes and concepts but i didn't get a lot specifically done. part of the reason why is that i tried using my possibility space code to handle it, and actually this is something that it's remarkably ill-suited for
the possibility space code really works the best when there's a really large possibility space and you just want a handful of mostly-unique items. with a procedural magic system, there are a lot of complex constraints. there's lots of things like "for each rank in the magic system, it should usually generate one of each magic type (healing, control, summoning, attack, etc) available in the system, and it should usually not generate two spells of the same type", or "when generating magic effects, it should usually either reuse the same adjective often or never (e.g., holy healing vs. holy bolt), but not sometimes" that the possibility space code really couldn't handle very well. it's definitely possible for the possibility space code to have some part in generating systems, but it definitely can't handle the main generation logic.
so basically in working on that, i got to the point of realizing that and stopped, because i didn't want to totally start over w/ some bespoke haskell code. but this is something i'd want to revisit at some point, b/c, well, hey procedural magical systems. it'd be neat.
here are some outputs i generated:
(one of the things that i realized pretty quick was that what i wanted was... each magic school as having a messy cluster of 'domains'. i started with 'light' and 'dark' and very rapidly realized the words and concepts i was using were actually from several distinct subclusters. so i broke them apart into a set of relations: 'light' as in actual light, associated with 'sun' and 'sacredness' and 'purity'; 'dark' as in shadow, associated with 'moon' and 'occult' and 'decay'. and it'll first pick one domain and either a random associated domain or the opposing domain, and during generation it can generate 'mystery' events that would pick a new associated/opposing domain and start to generate spells of that domain.
this has an issue where it doesn't really handle comprehensive science-type magic very well -- there would be no magic schools about converting earth air water and fire magic into each other. it's always got a specific elemental flavor. that's something i'd have to think about how it would work.)
the other thing i worked on, for the first half of october, was input stuff again. kind of a continuation of this thing. i'm using the reactive-banana FRP framework still, which, is okay. as mentioned previously, there's a bit of an issue with the rendering -- my understanding is that the event network (which is its own monad thing) expects to effectively be the 'main loop' of the program, and handle all input (via handlers firing events) and all output (via an event stream of `IO ()`). but that's not feasible for real-time rendering, and `gpipe` does all its render stuff inside a different rendering monad that can't really be converted to an `IO ()` action. so i'm not entirely sure how i'm gonna get everything to work together yet.

that being said, this two-week section went a lot better than the prior two-week section. the goal was to figure out automatic synthesis of input handlers, so that i could just make some inert data type like a `Menu a` or `Form a` or whatever, and populate that as desired, and then hand that over to get an `Event a` out of it.

let's talk about input handlers for a bit
on the lowest level, GLFW provides callbacks for raw input events. these are things like 'key pressed' or 'cursor moved' or 'mouse button clicked'. there is only minimal state recorded (only the control keys -- shift, alt, ctrl, etc), so if you want to know where the cursor was clicked, you need to maintain your own state, because as far as GLFW is concerned 'mouse click' is wholly separate from 'cursor position'.
so last time i put together some slightly more structured events, most notably click events that tell where they were clicked at. but. ultimately you don't want to be thinking at the level of individual input events at all. in my previous attempts at handling input, i generally had one gigantic handler that took the raw events and did stuff with them according to game state. stuff like 'did we get an up/down arrow press?' 'are we in a menu?' 'if we are, and we got an up arrow, and the index of the menu isn't already the top, then move the index up one' 'otherwise if we got a down arrow, and the index of the menu isn't already the bottom, then move the index down one' and so forth and so on for every possible action. if the game is in a certain state -- menu, world movement, paused, dialog box, shop screen, config screen -- then that state needed to be recorded and shoved into the scope of the input handler, and then the input handler would look at its available state and do whatever it needed to do, given the state and the event.
i would also generally have to manually construct all the menu layouts. if it was something easy like a list of options, that was pretty easy to automate. but something like an options screen, with lots of different labels and toggles and selection fields, i would have to literally go, for every single thing, "okay this is at coordinates `x,y` and it extends `w,h` and when it's clicked it should set this state", over and over and over again, and test a bunch to make sure i hadn't messed up and rendered overlapping items. it was a giant hassle.

for this, in haskell, i really wanted to avoid that entire nightmare. i wanted the program to do all that for me.
think about filling out a web form: you might click on some checkboxes, click on a select box, mouse down through the drop-down box, click again to select an option, click on a text field to focus it, type some text in, click on a separate text field to focus it, type different text, click on a radio button, click up through a number field, mousewheel scroll to get to the right number, and finally click on a submit button. there are a lot of low-level input events happening there, and none of them really escape the context of "making a form". so instead of thinking about each raw input event as it comes why not restructure things so that you're expecting an input stream of the form itself? when the form is done (or cancelled) it'll emit an event, and until then all the low-level input will be automatically handled by its own thing, and i as the programmer writing more code wouldn't have to ever really touch any of the input events or think about what they're doing.
what that actually means is writing code that would 1. auto-generate a layout for a given ui thing, maybe with some hinting 2. manage internal state like 'is this checkbox checked', and suitably generate render update actions when anything is selected or deselected or w/e, and 3. synthesize a new event stream from the raw input event stream. what this accomplishes in haskell is that it takes user input and treats it like any other kind of value -- a `Form a` is a `Functor` and can be mapped over and changed and composed and ultimately extracted into an event stream, which means that it's possible to treat these values as 'haskelly' values that you can reason about, instead of ephemeral things that are piecemeal assembled inside a handwritten event handler.

this kind of thing is basically entry-level for actual gamedev toolkits, but uh i've never actually gotten around to doing it. mostly because i haven't really been using haskell for realtime stuff until very recently.
anyway i did it, is the short version. i vaguely remembered using `reform` as part of happstack for my web server, and they have a form type that is an applicative. i also remembered they mentioned that `reform` was based off of `digestive-functors`, which was a simpler implementation (`reform` has a whole proof types thing that i don't need, at least yet) so i decided to check out how they did it. this led to some interesting hacks.

digestive-functors defines a FormTree type that does a wild GADT hack.
so a big part of the usability of this is that we want to have an Applicative instance for these types, right? so we can say `(+) <$> number 0 10 <*> number 0 10` or whatever, and have a Form that runs that code when it's evaluated, without the programmer having to really care about which numbers were selected or where exactly every pixel was clicked. and that presents a problem, since to work as a form element, it needs to cache its original constructor. this is what i tend to think of as the heterogenous infinite type problem: if we have a type `Foo b a` that stores a value of `b` and can convert it to type `a`, then anything that stores a `Foo b a` needs to include the type variables `a` and `b` in its type signature, even if we only ever pull an `a` out of it. this means it's impossible to make that type do anything useful, because the types will never line up (e.g., can't instance Functor b/c it's not `Foo b a -> Foo b c`, it would have to be like `Foo b a -> Foo (Foo b a) c`), which is not... robust. consequently any type that has to cache its transformations is pretty impossible.

except you can use existential quantification to avoid that, by just hiding the type variable:
```haskell
data Deferred a = forall b. Deferred b (b -> a)

run :: Deferred a -> a
run (Deferred b f) = f b

instance Functor Deferred where
  fmap f (Deferred b g) = Deferred b (f . g)
```
which is the first time i've actually seen a use for existential quantification.
this gets subsumed under GADTs:
```haskell
data Deferred a where
  Deferred :: b -> (b -> a) -> Deferred a
```
and that also solved a problem i was having with checkboxes. so a checkbox constructor should always return a `[a]` value, right? like that's how checkboxes work. but without GADTs there's no way to force a constructor like that.

so my type ended up looking like
```haskell
data Form a where
  Checkboxes :: Eq a => [(String, a)] -> (a -> Bool) -> Form [a]
  Radios :: [(String, a)] -> Int -> Form a
  Textbox :: (String -> Either String a) -> Int -> String -> a -> Form a
  EnumSet :: (Enum a, Show a, Read a) => a -> a -> a -> a -> Form a
```
to represent your basic ui elements. the thing with that is, well, that's not a Functor, right? imagine running `fmap head` on `Checkboxes [("foo", 1), ("bar", 2), ("baz", 3)] (const False)`. `Checkboxes` can only construct a `Form [a]`, so you can't reconstruct the fmapped value with `Checkboxes` in the same way that you could with `Radios`.

this is where we cheat:
```haskell
  Pure :: a -> Form a
  App :: Form (x -> a) -> Form x -> Form a
```
now you can implement `Functor` (and `Applicative`) by just shoving everything into the `App` constructor:

```haskell
instance Functor Form where
  fmap f v = App (pure f) v

instance Applicative Form where
  pure x = Pure x
  (<*>) f v = App f v
```
and then when you run it, you can extract those functions and evaluate them.
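to make 'extract and evaluate' concrete, a toy evaluator over the constructors above might look like this -- note i'm guessing at what some of the fields mean (treating the `Textbox` string as current contents and the final `EnumSet` field as the current value):

```haskell
evalForm :: Form a -> a
evalForm (Pure x)  = x
evalForm (App f v) = evalForm f (evalForm v)  -- run the cached functions
evalForm (Checkboxes opts checked) = [v | (_, v) <- opts, checked v]
evalForm (Radios opts ix)          = snd (opts !! ix)
evalForm (Textbox parse _ txt fallback) =
  either (const fallback) id (parse txt)
evalForm (EnumSet _ _ _ current)   = current
```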
but wait, i hear you say. that's not a valid Applicative instance! it breaks the Applicative laws! `pure id <*> v === v` no longer holds! but. doesn't it? after all in every possible evaluation path the same value will be generated. sure, internally it's an `App (pure id) v` constructor instead of a `v` constructor, but like, internally in haskell `id . f` would be a different thunk than `f`, so...
anyway the actual function is pretty messy due to the way rendering interacts with events -- the full type is

```haskell
buildForm :: MonadMoment m
          => RenderState os
          -> Event FauxGLFW
          -> Form a
          -> ([RenderUpdate os], m (Event a, Event [RenderUpdate os]))
```

which returns 1. a list of initial render actions, which need to be rendered when the form is first displayed, and 2. a tuple of event streams in the FRP monad, one corresponding to form submissions and the other corresponding to intermediate render updates (e.g., text being typed or checkboxes/radios being selected or deselected). in actuality it's still super unfinished, since i only wrote rendering/behavior handlers for the checkbox constructor. text input is a mess.

anyway i'm feeling tentatively positive about this. i still need to figure out how to actually use that `Event a` value to do things; right now i'm just debug printing it, but presumably the game would be in a state where it's prepared to do something with the `a` events it receives. and i still have no clue how to selectively enable/disable forms, in practice. so opening and closing ui boxes. still a very primitive thing.

anyway i'm sure i'll figure it out at some point.
- Sep. 16th, 2018
okay gamedev challenge reportback time
this time what i wanted to do was get some 'open' l-systems working ('open' is a technical word in the academic literature that means capable of communicating with an external system to guide the expansion of the system, such as e.g., plants reporting the light level of their leaves or the water level of their roots)
things didn't go great. i spent the first week and a half trying to get the l-system left and right contexts calculated correctly, so that was pretty frustrating, especially given the simplicity of the final solution.
okay so an l-system is a parallel rewriting system, which means that it uses a series of rules to convert a string into another string (that's the 'rewriting' part) and that when it does the conversion it converts all characters at once (that's the 'parallel' part). simple l-systems have rules like `K → K-K++K-K`, which is: given a `K`, turn it into the sequence `K-K++K-K`. more advanced l-systems involve parameters and predicates and stuff like that. they also include contexts. so instead of just saying `A → AB` (replace all As with AB) they say things like `A < A > B → CAB`, which means 'for every `A` that is preceded by an `A` and followed by a `B`, replace it with `CAB`'. figuring out what's 'before' and what's 'after' each symbol is initially trivial, but nearly every l-system is what is called a bracketed l-system, which uses symbols like `[` and `]` to permit branches. so a string like `F[-F]+F` is a split branch, where one branch runs `-F` and the other 'main' branch runs `+F`.

this means that 'before' and 'after' symbols get more complex to calculate, since the strings that are linearly before a given symbol might not actually be before it within the tree, and the strings that are linearly after a given symbol might not actually be after it. and also, the symbols that are after it might be a tree of symbols, rather than only a string. (since for `A[B]C` the `A` is followed by both `B` and `C`.) in the literature, these things are called the 'left context' (the string of things before it) and the 'right context' (the tree of things after it)
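ignoring contexts and brackets entirely, the parallel rewrite itself is tiny -- a context-free sketch just to fix ideas (the contextful, bracketed version is the hard part described above):

```haskell
-- one parallel rewrite step: every symbol is replaced at once, and any
-- symbol without a rule is kept as-is
step :: [(Char, String)] -> String -> String
step rules = concatMap expand
  where expand c = maybe [c] id (lookup c rules)

-- e.g., take 3 (iterate (step [('K', "K-K++K-K")]) "K")
```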
i only actually implemented one l-system this entire time, which was a system from a paper that basically shows a simple system that imitates a plant that sends out a pulse to its leaves to control how they grow, and the leaves send a signal back to the root telling it how much energy it has*. to do that it uses some fairly involved predicates on left and right contexts.
anyway so the first working setup i got is here and after that i got some more advanced renders of that same system working, plus a deck of systems with tweaked angle values that lead to different trees. not all that exciting, though it did get me closer to some more robust rendering primitives. now i can render arbitrary prismatoids at arbitrary rotations! which was not a thing i could do before.
did not actually get open systems working, which kinda sucked, but at least i'm in a much better place to get them done at some point later since i have most of the infrastructure i'll need.
---
*kind of
- Aug. 30th, 2018
okay it's getting to the end of the month so it's time for the two week project reportback.
uhhhh this one went a lot better than the last one, let's say.
so the goal for this was "landscape generator", since that's something i've wanted to do for ages but never really gotten around to in a way that i was pleased with.
what i have right now is...
( images )
which i would say looks pretty good. like this was definitely made much easier by LEVERAGING my EXISTING TOOLBASE -- i had the graph-grammar expander from old code; and the hex shape embedder from a prior project; and the basic geometry setup from the house renderer, which was itself a revision of the svg renderer from another project; and the random vegetation was generated via my possibility space code that i've been working on for a real long time. so i was able to do this in two weeks because i spent lots of other two-week projects building the foundation.
there's a lot of other stuff i'd like to add to this (roads, rivers, coherent watersheds, an underground, villages and dungeons, etc) but just as a start i would call this pretty decent.
since i figure this code might actually be of interest to other people i also put the code up on github for people to take a look at.
also what this did was really... put into focus everything that goes into making a game? like oof here's all this 3d geometry, but if i want any kind of coherent engine i also need collision code so it's possible to move things around in the space, and ui/picking code so i can determine what tiles were selected when somebody clicks on a given pixel, and input handlers so that actions can have meaningful effects, and also a whole different rendering setup so that it's possible to dynamically alter the world geometry without needing to regenerate every single vertex in the entire game world. what they say about graphics making a game vastly more complex is extremely true.
anyway we'll see what the next two week project might be, who knows. i still haven't entirely made up my mind what i want it to be.
- Aug. 14th, 2018
it's the 14th so time for a gamedev thing reportback. this time it was... well it ended up being a ui thing, and i didn't really get much accomplished. i was going to do some landscape geometry stuff, but before that i wanted to get input handlers sorted, since i have lots of bad memories of manually writing input handlers with lots of manually-tracked state and it just being a gigantic depressing hassle. so i was like, hey i'll put together some input wrappers and it'll be great, no more thinking about low-level input junk.
that is not how things went. i ended up just kinda building up a mess of mostly-broken frameworks, where none of them actually worked either in practice or in theory. near the end i was like "okay this is a mess and i need to actually find an input library somebody else made", and i ended up with reactive-banana, which was... i mean, i still don't have it working, in part because it doesn't play super well with gpipe, or raw opengl rendering generally. (when doing rendering you really don't want to write to gpu buffers at random intervals, directly in response to input received; you want to do something like flag things changed as dirty and only actually change buffers once, right before rendering. this is fairly difficult to do with reactive-banana, and gpipe itself wanting things to happen in a Context and not in raw IO makes things considerably trickier.)
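for concreteness, here's roughly the shape of the dirty-flag idea in plain haskell -- just a sketch under the assumption that some 'upload' action does the real buffer write (in actual gpipe that write would live in its ContextT monad, not plain IO), and all these names are mine, not from any library:

    import Control.Monad (when)
    import Data.IORef

    -- a value plus a flag recording whether it changed since last upload
    data DirtyRef a = DirtyRef (IORef a) (IORef Bool)

    newDirtyRef :: a -> IO (DirtyRef a)
    newDirtyRef x = DirtyRef <$> newIORef x <*> newIORef False

    -- input handlers can call this at any time; it never touches the gpu
    writeDirty :: DirtyRef a -> a -> IO ()
    writeDirty (DirtyRef v d) x = writeIORef v x >> writeIORef d True

    -- called once per frame, right before rendering
    flushDirty :: DirtyRef a -> (a -> IO ()) -> IO ()
    flushDirty (DirtyRef v d) upload = do
      dirty <- readIORef d
      when dirty $ do
        readIORef v >>= upload
        writeIORef d False

the awkward part is wiring an event network so its outputs end up as writeDirty calls instead of firing uploads directly.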
so i don't actually have any kind of finished product at this point, but i did throw together some ui/action code that i think is a fairly solid foundation to build on later. if i finish it. but ugh, input, i'm sick of it.
next two weeks: actually doing landscape geometry synthesis, because i'm really sick of internals stuff and i'm ready to get back to rendering things that look nice
- Jul. 31st, 2018
-
Tags:
gamedev reportback: gpipe renderer
end of the month so that means it's two-week project reportback time.
this two weeks, as you might have surmised, was "use gpipe to set up a better rendering environment". it went okay.
lambdacube had been pretty frustrating because it was very bespoke and had poor documentation and was also broken in some important ways. gpipe is a lot more robust, which is nice, since i can mostly stop thinking about uniforms and texture names and binding values and the like and just think about how to generate geometry. though a definite goal moving forward with this is getting robust geometry tools so i no longer have to think about generating geometry and i can focus on just making stuff. CSG here i come. hopefully.
anyway so what i actually accomplished is: a basic render setup, with the ability to load and bind textures, rotate the camera around, and (provisionally) print text. i also wrote part of a CSG setup to subtract one shape from another shape; we'll see how that goes. the next big hurdle is interface/interactivity; i'm thinking about writing some basic MVC kinda setup with primitives like 'ui item' 'menu' 'keypress' etc, so i can mostly stop manually positioning ui and hardcoding in input handlers. we'll see how that goes.
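for the CSG bit: the version that's easy to sketch is the signed-distance-function flavor, where a shape is a function from a point to its distance from the surface, negative inside. that's just the simplest way to state the boolean ops, not necessarily how a mesh-based subtract works:

    import Linear (V3, norm)

    type SDF = V3 Float -> Float

    sphere :: Float -> SDF
    sphere r p = norm p - r

    -- subtraction: inside a, but not inside b
    subtractSDF :: SDF -> SDF -> SDF
    subtractSDF a b p = max (a p) (negate (b p))

    unionSDF, intersectSDF :: SDF -> SDF -> SDF
    unionSDF     a b p = min (a p) (b p)
    intersectSDF a b p = max (a p) (b p)

doing the same subtraction on actual triangle meshes, which is what a renderer wants, is a whole lot hairier.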
that being said my next project is probably gonna be... more rendering stuff? all these projects are kinda getting tied together at this point, so maybe what i'll work on next will be turning graphs into generated landscapes, since as nice as all of these floating houses or plants or w/e have been, really i've just been itching to generate an actual space.
- Jul. 24th, 2018
-
Tags:
gpipe part 1
anyway in other news i got gpipe working. i'll make a more comprehensive post at the end of the month but for now all i'll say is just... ughhhhh managing render data is a nightmare, constantly. i have polygons rendering but now it's time to think about textures and how to manage input and ui and how to draw text. html has really spoiled me in that regard, in that rendering text there is just... rendering text. and it's only after all that stuff is settled that i can start thinking about what i want any of this stuff to actually do. there's just not the infrastructure to make anything that does anything yet.
- Jul. 14th, 2018
-
Tags:
gamedev reportback: villager stats generator
i didn't actually get much done for this latest two-week project, which has kinda been bothering me. feeling pretty lethargic about everything. kinda sucks!
anyway basically the only thing i did was this, before running out of steam. the problem with having these game project goals is that, okay, realistically two weeks is not enough time to make a big complex game. but if i can cut the work into two-week-sized chunks, then i can work on that and feel accomplished and have an important chunk of work done. but if i fail to do that, well, it just feels like i'll spend two weeks working on a bad prototype that will never really matter in the end, and that's not exactly a thing that gets a person inspired and eager to work on the thing. so that's not great.
(anyway this was going to be some kinda town-builder inspired by the breath of fire 3 faerie village, where all your townspeople just have a random name and three random stats. here i've added error bars, so that a person's work output isn't just a flat roll in a 0-100 range but a random point on their personal error bar. also there are specific subclasses which only become available if a given person has the corresponding class. anyway none of it really came together so all that i really did was make the stat roller.)
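the roller itself is tiny. a hypothetical sketch of the idea, with made-up ranges rather than the actual code:

    import System.Random (randomRIO)

    -- a stat is a base value plus the width of its error bar
    data Stat = Stat { base :: Int, errorBar :: Int } deriving Show

    rollStat :: IO Stat
    rollStat = Stat <$> randomRIO (0, 100) <*> randomRIO (5, 25)

    -- a given day's work lands somewhere on the bar, clamped to 0-100
    workOutput :: Stat -> IO Int
    workOutput (Stat b e) = do
      delta <- randomRIO (negate e, e)
      pure (max 0 (min 100 (b + delta)))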
- Jun. 29th, 2018
-
Tags:
okay and time for the second post about the myth/history generator
it went... alright.
here's the latest history file and here's the data file it was generated from. at this point the biggest issue is a data/structure one: it's easy to write data files that don't really generate interesting or coherent histories. and like, it's nice to get to that point i guess, instead of being stuck on "this can't do much because the code isn't done". so, one of the issues here is just that the development actions don't really mean much since most of them are never used, and they can't change a culture's physical appearance in any way. since wasteland beasts are just spontaneously generated, an enormous amount of the history events are wasteland beasts being generated and exploring new regions which in turn generate new wasteland beasts. heroes exist, but they don't really do anything aside from wander around.
stuff like that.
that being said, it's still nice to have the code in a usable state, even if it's not ideal. this dataset ultimately needs more writing, and the code alterations are more like "add hooks to generate random vegetation from my random plant definition", "add hooks to generate random landscape connections from my graph-expansion code", "add hooks to generate random house styles from my house-rendering code", and so forth. basically making it multimedia instead of just text. so that might be a thing to handle later on, but where it's at right now is definitely usable, even if it's not the most interesting. it's a working prototype that's not too frustrating to use, and now just needs some refinements and a lot more data.
the gamedev index post has been updated
- Jun. 15th, 2018
-
Tags:
so, this two week project was the myth/history generator (again). this is a thing i've been revisiting for a while now! and it's kind of frustrating because...
okay so one of the plans i have for all of these two-week procedural things i'm making is to stitch them together into one big project (or several moderately-sized projects), and in most of those plans this setup is the glue. procedural buildings are fine but they only make sense in a world, in the context of other buildings, in the context of settlements. procedural plants are fine, but where do they grow and what properties do they have (dye, medicine, spice) and which people know about them. procedural stuff is just stuff unless it's put together in some kind of context.
so, enter the history generator, a thing for associating things together and creating a shared context.
so this is kind of a Big Thing that i've wanted to get working for a really long time (i mean, it was the very first two-week project, and that project was 'revisit this generation code i wrote a year ago', so this code goes back a long way at this point), and as i get closer to getting it working, it gets more frustrating? because it becomes clear how much is left to do, or because even with various parts of the code in place there's the much bigger question of how to construct the dataset. finishing a big part of it and then going "okay yes but my output is still pretty garbage, so this is a big thing i'm emotionally invested in and it sucks and that kinda hurts". etc.
except hey, this is still a lot of progress, and even if internally i'm still frustrated, it's important to, you know, acknowledge that, and that it's making some big steps towards being good. before you have something good you have something kinda terrible; that's just how making things goes.
here's some outputs:
so the thing about this is... like even the final product isn't that impressive, in terms of code? it's really simple. but the big accomplishment here isn't the thing itself so much as the method of generating it. most of my generation stuff is pretty, uh, coderly. if i want to tweak the output i need to change some data definitions in source files and rebuild the project. it's all hardcoded. this myth generator though is data-backed. the system itself is really just a parser and an evaluator, and all the actual stuff is in a data file.
this data file, specifically.
the program is run like

    ./dist/build/myth/myth data/desert2 --generate UNIVERSE -c1 --action _ -l50 > testoutputXXX.html

and then that's it. it generates one 'universe' instance (from the UNIVERSE and -c1), i tell it the running action generators are called _, and i tell it to generate 50 events (the -l50). currently the data i'm using isn't very complex, but like... it's generating a random landscape and filling it with random features, and actors are randomly moving around inside that. and none of that's hardcoded; it's all just from evaluating that data file. so i'm pretty proud of that.
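the core loop of a setup like that is small enough to sketch. this is a toy version of the parser-plus-evaluator split, using my own made-up file format ("SYMBOL -> replacement words", one rule per line) rather than the actual myth format:

    import qualified Data.Map as M
    import System.Random (randomRIO)

    -- every symbol maps to one or more possible right-hand sides
    type Rules = M.Map String [[String]]

    parseRules :: String -> Rules
    parseRules = M.fromListWith (++) . map parseLine . lines
      where parseLine l = case words l of
              (sym : "->" : rhs) -> (sym, [rhs])
              _                  -> error ("bad rule: " ++ l)

    -- expand a symbol by picking a random rule and recursing; anything
    -- with no rule is a terminal word and is emitted as-is
    expand :: Rules -> String -> IO [String]
    expand rules sym = case M.lookup sym rules of
      Nothing   -> pure [sym]
      Just alts -> do
        i   <- randomRIO (0, length alts - 1)
        out <- mapM (expand rules) (alts !! i)
        pure (concat out)

all the interesting behavior then lives in the data file; swapping data/desert2 for a different file changes the output without touching the code, which is the whole point.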
anyway in related news, this two-week project is up, but in an UNPRECEDENTED MOVE i'm gonna keep working on this same thing for the next two weeks, because uhhhh this is finally getting to the point where the generator is Doing Stuff and i'd like to get it generating some stuff that's more legitimately impressive, rather than just proofs of concept.
also there's this whole other post i need to write at some point about random generation, but i'll save that for later. short version: right now i'm using a dumb random generator instead of my fancy possibility space code, because my fancy possibility space code wasn't fast enough due to various reasons. i have what i think are pretty concrete ways to fix it and make it way better and expand the functionality of this setup substantially just by allowing much better control of random values, but currently none of that is in place and so the randomness in the generated outputs can be kinda whack.