let it leave me like a long breath

let it dissipate or fade in the background

Entries tagged with programming

  • Dec. 5th, 2023
    Tags:
    • code,
    • programming
    posted @ 03:33 pm

    anyway here's the actual math if anybody wants to help me with this :V

    so i have some stored vertices for a prism going from 0,0,0 to 0,1,0. it's a rectangular prism 0.2 units thick and 1 unit long. fine, whatever.

    i want to reposition it so that it's actually positioned between two arbitrary points, p1 and p2. the way i'm doing that currently is like this:

    const at = (vec) => {
        const base = new Point3d (0,1,0);
        const axis = base.cross (vec.normalize());
    
        const angle = base.vectorAngle (vec);
    
        return axisAngle (axis, angle).multiply(scale (1, vec.magnitude(), 1));
    };

    where

    
    // returns angle in range 0..pi
    Point3d.prototype.vectorAngle = function (vec) {
        const this_ = this.normalize();
        const vec_ = vec.normalize();
        const n = vec_.cross(this_);
    
        return Math.atan2 (n.dot(n), this_.dot (vec_));
    };
    
    function axisAngle (axis, angle) {
        const sin_a = Math.sin (angle / 2);
        const cos_a = Math.cos (angle / 2);
        const axis_ = axis.normalize();
    
        return new Quaternion (axis_.x * sin_a, axis_.y * sin_a, axis_.z * sin_a, cos_a).normalize().toMatrix();
    }

    (i mean i'm not sure that all the quaternion & vector math is right but if i got cross products and quaternion->matrix code wrong i assume it would've cropped up before now)

    the fundamental thing is that i assume given two points like...
    let a = new Point3d (0,1,0).normalize();
    let b = new Point3d (1,2,3).normalize();

    then something like m = axisAngle (a.cross(b), a.vectorAngle(b)) should generate a matrix where m.multiplyVector(a, 1) equals b

    when in fact!!! if i do it with this code & these values i get
    b = Object { x: 0.2672612419124244, y: 0.5345224838248488, z: 0.8017837257372732 }
    
    m.multiplyVector(a, 1) = Object { x: 0.25318484177091666, y: 0.5991446895152781, z: 0.7595545253127499 }


    so very close but not actually the same. it seems like they should be the same.
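    (a likely culprit, for the record: Math.atan2 expects the cross product's magnitude, but n.dot(n) is the squared magnitude -- sin²θ rather than sin θ -- so the recovered angle comes out wrong whenever the sine isn't exactly 0 or 1. a sketch of the fix, using only methods the code above already has:)

    // returns angle in range 0..pi
    Point3d.prototype.vectorAngle = function (vec) {
        const this_ = this.normalize();
        const vec_ = vec.normalize();
        const n = vec_.cross(this_);

        // atan2 (|a×b|, a·b): take the square root, since n.dot(n) is |n|^2
        return Math.atan2 (Math.sqrt (n.dot(n)), this_.dot (vec_));
    };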

    (also there's a secondary issue where i know... the at code there is underspecified, because you need some spare coordinate axes to determine how points in the plane of vec get positioned, since all you have so far is the y axis transform; you need at least an x/z one too. i think this is a separate issue than the above but to be fair i haven't tried to fix it yet.)

  • Jan. 13th, 2021
    Tags:
    • gamedev challenge,
    • programming
    posted @ 12:31 pm

    okay it's been a while. uhhh so i moved across the country in october so that was exciting. also you know, everything else that's happened in the US since then has also been exciting.

    anyway i haven't really kept up on my two-week projects but i have done some stuff so i might as well post about it now.

    i didn't really do anything code-wise for october or november, since, moving and then the election had a way of making me real unable to focus on anything. early december i was gonna do procjam 2020 but i ended up not really getting much of anything done. i did some more stuff with svg 3d renders:

    • boxspin #1
    • boxspin #2
    • boxspin #3
    • boxspin #4
    • boxspin #5


    (this is a mostly-accurate implementation of a BSP-tree based polygon-cutting algorithm for running painter's algorithm to depth-sort 3d geometry.)

    but that never really got past the point of an early demo




    after that i mostly played a bunch of 'per aspera' and immediately got frustrated with how simplistic their terraforming model was so i was like "i'll make my own terraforming game"

    so i threw together some parts of that in the later half of december -- fixed up some of my polyhedra code slightly and spent a lot of time digging into GPipe to try to get framebuffer-based picking working. previously all my picking had been geometric: raycasting through a map grid and doing geometric calculations to see what was hit, and using that data to determine what if anything was under the mouse cursor.

    framebuffer-based picking is when you render the screen a second time, but with everything that can be picked (as in selected by the cursor) drawn a different color. then you can just look up what color a given pixel is in that framebuffer to see if there's anything to click on or hover over. this means no geometry aside from your usual camera transforms, but it does mean a more complex shader/rendering environment. GPipe actually had a framebuffer-reading glitch in its library code i had to fix before i could get things working. it took a while. but anyway i got it so you can properly pick across a planetoid that i'm rendering, which is a good first start for a planet-based terraforming game.
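    (a minimal sketch of the id half of that, separate from any gpipe specifics: give every pickable object an id, bake the id into the pick-pass color, and decode whatever pixel you read back. ids survive the round trip exactly because each channel is an integral byte.)

    import Data.Bits (shiftL, shiftR, (.|.))
    import Data.Word (Word8)

    -- pack a (non-negative) object id into a 24-bit rgb color for the pick pass
    idToColor :: Int -> (Word8, Word8, Word8)
    idToColor i = ( fromIntegral (i `shiftR` 16)
                  , fromIntegral (i `shiftR` 8)
                  , fromIntegral i )

    -- decode the framebuffer readback under the cursor into the object id
    colorToId :: (Word8, Word8, Word8) -> Int
    colorToId (r, g, b) =
        (fromIntegral r `shiftL` 16) .|. (fromIntegral g `shiftL` 8) .|. fromIntegral b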



    then in the first half of january i started hooking up actual UI -- there's currently no real "game model", but you can put down an initial building ("seed factory") and then use a menu to put down other constructions. i'm going to need to totally restructure my layout code, since it really wasn't designed for flexibility or robustness, so even doing something like "make a horizontal menu" is a whole mess of code that could be a lot simpler.

    but i'm feeling kind of burnt out on that and i'm about to start working on a different 2wk project, so i figured i should probably write a post here since it's been a while.

  • Oct. 5th, 2020
    Tags:
    • gamedev challenge,
    • programming
    posted @ 12:11 pm

    procedural skybox

    so, two week projects

    this one was actually a pretty big success! it's been hard to uhhh schedule anything these days, and going forward things are likely gonna be a mess for at least another month or two b/c i'm moving. but i wanted to get something nice and visual to show off, and one of the things i've been wanting to do for a while is shaders.

    so i made a skybox shader for my game!


    first let's maybe take a diversion into what a shader is, because people (well, ~gamers~) hear the word a lot but it's maybe not immediately obvious what exactly they are, aside from 'something to do with graphics'. you can skip this if you already know what a shader is.

    digital rendering is fundamentally about polygons, right? drawing triangles on the screen. that's a several-step process: placing some points in 3d space and connecting them together to make a triangle, figuring out what that triangle would look like on the 2d plane of the screen, and then filling in every visible pixel of that triangle with some appropriate color. these are all fairly tricky operations, and they're generally handled by the rendering library you're using.

    (this is a tangent, but for example, opengl has this property where edges are always seamless if their two endpoints are exactly the same, which is to say that the line equation they use to figure out which screen pixels are in a triangle will always be one side of the line or the other, with no gaps or overlaps, if you give it two triangles in the shape of a square with a diagonal line through it. they can seam if you generate geometry with 't-junctions', because then the endpoints aren't exactly the same -- this is something you have to keep in mind when generating geometry, & actually some of my map code does have seams because of the way i'm rendering edges.

    conversely, the old psx rendering architecture did not give this guarantee, which is why in a lot of psx games you can see the geometry flicker and seam apart with weird lines, which is where the 'is this pixel in this triangle or that one' test is failing for both triangles on what should be seamless geometry.)

    so this always involves a lot of really low-level math calculations -- when i say "figure out the position in 2d space", what that actually means is doing some matrix multiplication for every vertex to project it into screen space, and when i say "fill in every visible pixel", that means doing a bunch of line scanning loops. these things were what the first GPUs were designed to optimize for.

    so, opengl used to have what's called a fixed-function rendering pipeline: you basically had no control over what happened during the rendering process, aside from some very basic data. for example, your 3d points came with a 3d vector attached that was its position. but you could additionally attach another 3d vector to each vertex; this would be interpreted as a color: this gave you vertex-coloring. if you wanted to texture your triangles, instead of having them be a flat color, you could attach a 2d vector to each point; this would be interpreted as uv coordinates, or the place to sample from the currently-bound texture, and then it would multiply that sampled color by the vertex color to get the actual color. the final thing you could attach was another 3d vector, which would be interpreted as a vertex normal, and that would be used in the (fixed, hardcoded) opengl lighting calculations, to give you basic support for lighting techniques.

    there's a process in there that would look at the values attached to each vertex, and at how the vertices were positioned in space, and then interpolate between those values for each pixel. so this is how you got smooth shading, or working textures: if you had one UV coordinate that was `0, 1` and another that was `1, 1`, it would generate some difference vectors and then while counting out each pixel it would provide that pixel with an interpolated UV, leading to different values (color, textures, lighting) getting sampled across each pixel. (actually iirc the opengl built-in lighting was gouraud shading, not the more modern phong shading, and if you care about the difference you can look that up but this is already a really long shader explanation.)

    so what opengl supported near the beginning was this: 3d points, colors, texture coords, lighting normals; and you could bind camera & perspective matrices, textures, and lights. those were the only options available, really.

    then opengl went up a few versions and decided to be more flexible. instead of all of those things being hardcoded into opengl itself, they decided, they would provide a programmable interface for people to write their own vertex and pixel code. this generalized everything here: instead of being able to send in only positions, or colors, or uvs, or normals, you could attach any kind of arbitrary information you wanted to a vertex, provided you could ultimately use it to output a position. and instead of pixels being rendered by applying these fixed transforms of vertex color times uv coordinate texture lookup sample times lighting angle, you could do anything you wanted with the per-pixel interpolated values, provided you could ultimately use it to output a final color.

    this blew things wide open. i remember back in the day when i had a computer old enough that it couldn't run shaders, and i couldn't run any newer games, and i was like, 'look shaders are all just fancy effects; i don't see why they can't just have a fallback mode'. and sure, if your shader is just a fancy blur effect you're probably still doing much the same kind of thing as the old opengl fixed pipeline: giving pixels colors based on base color and texture color and lighting level. but there's so much stuff that cannot possibly be replicated with the fixed-function pipeline. there's actually a really interesting talk by the thumper devs where they talk about how they make the track seamless by storing twist transforms in their shader code and applying them in their vertex shader, and doing most of their animations just by exposing a time value in their shaders (which is a very basic trick but it's one that was completely impossible in the old fixed pipeline).


    so i wanted to make a skybox shader, because so far i've just been drawing a flat blue void outside of world geometry. one of the things i've kinda been thinking about for a while is a game where you have to navigate by the stars at night (inspired of course by breath of fire 3's desert of death section), and that implies that you have 1. distinct stars and 2. an astronomically-accurate rotating skybox.

    so this is where the first bit of shader magic comes into it: i placed a quad at the far clipping plane, and then inverted my camera matrix so that i could push those values into world space and convert them into angular coordinates. by converting those to spherical coordinates, i could have a perfect spherical projection embedded into this flat quad, that would correctly respond to camera movements. keep that in mind for every screenshot i post about this: that this is a single flat quad, glued diorama-like to the far end of the camera; everything else is the projection math i'm doing per-pixel.
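    (a rough cpu-side sketch of that unproject-then-spherical step, written with the linear package; the real version runs per-pixel in the fragment shader, and 'invCam' here is assumed to be the inverted view-projection matrix with the camera sitting at the world origin, so the unprojected point doubles as a direction:)

    import Linear

    -- take a screen position in ndc, push it out to the far plane, unproject it
    -- back into world space, and read the resulting direction as two angles
    skyAngles :: M44 Float -> V2 Float -> (Float, Float)
    skyAngles invCam (V2 sx sy) =
        let V3 x y z = signorm (normalizePoint (invCam !* V4 sx sy 1 1))
            azimuth  = atan2 z x   -- angle around the horizon
            altitude = asin y      -- angle above or below it
        in (azimuth, altitude)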

    so i queued up a bunch of posts on my screenshot tumblr, starting from here with the first attempts. (use the 'later' link).

    i wanted something stylized that matched the look of the rest of the game (so, aliased and triangular), so the first thing i wanted to figure out was how to draw a triangle onto the celestial sphere. this turned out to be pretty tricky, since shader code is pretty different from code you'd be writing elsewhere. like, if i wanted to draw a triangle polygon, i'd expect to calculate some points in space and then position lines between them. in a shader, you can't really do that -- or rather, it would be really inefficient. instead, you want an implicit equation, something that you can run in a fixed-form that gives you a value telling you how close the point is to the edge of the triangle, so that you can threshold that and get an actual triangle shape.
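    (a minimal flat-2d sketch of that fixed-form idea, leaving out the spherical wrinkles that made the real thing hard: one edge function per side, evaluated at the pixel and thresholded together. winding here is assumed counterclockwise.)

    import Linear (V2 (..))

    -- signed 'which side of edge ab is p on' value; zero exactly on the edge
    edgeFn :: V2 Float -> V2 Float -> V2 Float -> Float
    edgeFn (V2 ax ay) (V2 bx by) (V2 px py) =
        (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    -- a pixel is inside the triangle when all three edge functions agree in
    -- sign; nothing ever 'draws a line' between the points
    inTriangle :: V2 Float -> V2 Float -> V2 Float -> V2 Float -> Bool
    inTriangle a b c p = all (>= 0) [edgeFn a b p, edgeFn b c p, edgeFn c a p]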

    i never actually got that working right, but i got it working kind of right and i figured, well, some distortion is fine.

    ultimately (as you can see if you page through all the posts) i got stars placed, constellation lines connected, the axis tilted, rotation happening, and even some very simple celestial object motion.

    the big flaws of this shader, currently, are that planets and moons are just flat circles. the major issue with that is that there are no moon phases, which looks weird as the moon advances relative to the sun in the plane of the ecliptic (the moon is up during the day when it's near a new moon, which irl we don't notice b/c the new moon is mostly invisible, but here it's still just a continual full moon). there's also no depth-testing the celestial bodies relative to each other; here i always draw the moon over the sun, and those are both always drawn over venus and mars, which are drawn over jupiter and saturn. so when planets are visible they're not actually properly occluding each other; when i was drawing things from in orbit around jupiter, i had to manually move jupiter up so that it would occlude the sun. there's also no 'eclipsing' code, which is an issue not just for solar eclipses, but also like, yeah on ganymede jupiter is continually eclipsing the sun, & between the 'moon phase' code and some 'eclipsing' code that should pretty radically change the look of the environment. but none of that is currently being handled at all.

    also i never did get the lighting angle correct -- at one point i got tired of the landscape being this fullbright shading regardless of the 'time of day' in the skybox, and so i pulled out some of the planetary calculations and just had the sun's position determine the light. this is another thing that's pretty hardcoded, right, since theoretically the moon should also be contributing light based on its phase, and if this is a jupiter-orbit situation where the sun is occluded then it should have its component cut out from the lighting calculation. not only am i not doing any of that, but i couldn't get the basic 'angle of the light in the sky' math right, so there's some weird displacement where the lighting doesn't really follow the sun all that accurately. i'm sure i'll figure it out at some point, and even just having any changing lighting looks great compared to what i had before.

    likewise i'm doing some 'day/night' calculations, but that's just slapping a blue filter over things and fading out the stars; i'd like to do something a little more in-depth.

    (also, since this game isn't gonna take place on earth i'd really like to add in some different stellar bodies -- more moons, different planets, that kind of thing. but that's its own project.)

    something else i discovered while working on this is that the gpipe shader code isn't very good. it aggressively inlines -- and i guess that makes sense because how could it not -- and that means that it generates hundreds or thousands of temporary variables. my shader code, which is fairly compact in haskell, undoubtedly expands into thousands and thousands of lines of incoherent garbage in GLSL, which is an issue because that's the code that's actually getting run. somebody mentioned a vulkan library for shader generation which mentions the inlining issue, but i don't know if a similar technique can be ported back to gpipe, and gpipe itself isn't really being actively updated. there's no full replacement for gpipe around, but i guess if anybody does make a similar vulkan library for haskell i might be tempted to switch over, if only because the shader compilation is really bad. not unworkable, but probably i don't have an enormous amount of shader budget to work with, just because the transpilation into GLSL is so inefficient without some way to manage the inlining.

    anyway here are some references that i looked at while working on this:

    • the deth p sun art that was one of the original inspirations for this
    • a starfield shader tutorial that i used some techniques from, notably the dot-product noise function and also the layering (which is how i have big stars, small stars, and really small stars in the galactic band)
    • a shader tutorial that helped me w/ the line-drawing sections for the constellations
    • ofc i was doing all this on spherical geometry, not cartesian, so i needed to check wikipedia a bunch for how to do the equivalent coordinate operation: 'spherical coordinate system', common coordinate transformations, great-circle distance
    • some stackoverflow stuff: how to calculate distance from a point to a line segment on a sphere (this is used in the constellation lines, right, b/c the pixel is the point and the constellation line is the line segment)
    • i didn't want to go overboard w/ star simulation, but i did want some varying colors, so i looked at the blackbody radiation spectrum and eyeballed it to get a coloring function. i had this idea to mix in a second spectrum of 'fantasy' colors so that i might get green or purple stars or w/e too, but i never got around to that.
    • i wasn't actually sure if it was possible to write draw-a-polygon code in a shader until i saw this shader, which in drawing that triangle implements a general-purpose arbitrary-regular-polygon function. i couldn't copy it directly but it definitely helped guide me towards something that worked in gpipe + on a sphere
    • also a bunch of people in various discords and irc channels, b/c actually i'm not super great with math! when i was griping about the pinching at the poles due to lat/long, somebody offhandedly was like 'oh yeah try taking the sine of that' and it turned out that was exactly what i should do.

    all-in-all really proud of this! it's the first time i've done non-trivial shader work, and even though it still needs a lot of work it's a pretty good starting point. now the landscape looks way more like a landscape, and less like some geometry floating in a void!


  • Mar. 3rd, 2020
    Tags:
    • code,
    • programming
    posted @ 04:38 pm

    so i'm a big fan of the ctm ('connected textures mod') mod in minecraft. in large part because i use the greatwood texture pack, which has a bunch of ctm textures and really shows off what a little bit of variation can do.

    texture variation (and geometry variation) are huge things to have. vagrant story, it turns out, is a game that's a lot like minecraft -- every room is assembled out of meter-large block/halfblock collision (well, and stairs), and its textures are (generally) 32x32 pixels per meter. so only four times the density of vanilla minecraft's 16x16 textures, and a quarter of the fancy smooth 64x64 sphax textures that a lot of people use. but because vagrant story has a whole lot of texture variation (and more complex world geometry, though that turns out to not actually be as big a concern as you might think) vagrant story looks amazing even now, whereas minecraft looks, you know, kind of garbage. it turns out big fields of repeating textures look bad.

    so i really wanted to add more intricate texturing modes to the game i'm working on, since i feel like that'll be kind of critical for the visual style. there are a lot of different modes i want to support (automatic wang tilings would be great) but to start i decided to copy the simplest ctm modes:

    • fixed, which is a single texture
    • random, which is a list of textures with weights, to be picked from randomly
    • repeat, which is an x*y grid of textures that will be laid out in a repeating pattern

    the type i wrote for those initially was

    data Texturing
      = Fixed FilePath
      | Random [(Float, FilePath)]
      | Repeat Int Int [FilePath]
      deriving (Eq, Ord, Show, Read)
    which is pretty obvious how it works, right? i have a big hardcoded texture-loading section that looked like
      ...
      ,("sand_loose", "img/tiles/sand_loose.png")
      ,("sand_loose_side", "img/tiles/sand_loose_side.png")
    
      ,("sand_cracked", "img/tiles/sand_cracked.png")
      ,("sand_cracked_side", "img/tiles/sand_cracked_side.png")
      
      ,("sand_silty", "img/tiles/sand_silty.png")
      ,("sand_silty_side", "img/tiles/sand_silty_side.png")
      
      ,("loam_coarse", "img/tiles/loam_coarse.png")
      ,("loam_coarse_side", "img/tiles/loam_coarse_side.png")
      ...
    and then that would be trivially changed to
      ...
      ,("sand_loose", Fixed "img/tiles/sand_loose.png")
      ,("sand_loose_side", Fixed "img/tiles/sand_loose_side.png")
    
      ,("sand_cracked", Fixed "img/tiles/sand_cracked.png")
      ,("sand_cracked_side", Fixed "img/tiles/sand_cracked_side.png")
      
      ,("sand_silty", Fixed "img/tiles/sand_silty.png")
      ,("sand_silty_side", Fixed "img/tiles/sand_silty_side.png")
      
      ,("loam_coarse", Fixed "img/tiles/loam_coarse.png")
      ,("loam_coarse_side", Fixed "img/tiles/loam_coarse_side.png")
      ...
    and then i could add new textures from there.

    the thing was, just using that type above was kind of awkward? since one of the things i wanted to do was load up each image based on the given path, but without discarding the texturing information. but to anybody who's been writing haskell for any amount of time, the issue is obvious: what i actually want is to add a type parameter

    data Texturing a
      = Fixed a
      | Random Float [(Float, a)]
      | Repeating Int Int [a]
      deriving (Eq, Ord, Show, Read)
    
    instance Functor Texturing where
      fmap f t = case t of
        Fixed a -> Fixed $ f a
        Random tw was -> Random tw $ (fmap . fmap) f was
        Repeating w h as -> Repeating w h $ fmap f as
    and now i can fmap things through the texturing data without losing the texturing information. this is a concrete example of something having a "functor context" that's invariant under fmap (other examples are 'fmapping a list or a tree can't reorder items in the list or tree').

    except there's still an issue, since what i really want to do is mapM -- if i have [Texturing FilePath], i don't want to end up with [Texturing (IO Image)] after doing the file loading, i want IO [Texturing Image]. that's a different function, traverse, which is part of the Traversable typeclass.
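    (once the Traversable instance exists, the loading code is almost a one-liner -- sketched here with a hypothetical Image type and the loader passed in as an argument:)

    data Image  -- placeholder for whatever the real loader returns

    -- traverse once for the list and once for each Texturing; all the IO gets
    -- sequenced to the outside
    loadAll :: (FilePath -> IO Image) -> [Texturing FilePath] -> IO [Texturing Image]
    loadAll loadImage = traverse (traverse loadImage)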

    this is kind of like, warming up to this texturing data being a Type that has typeclasses, not just some tagged enum: Functor says it can be mapped over, Foldable says it can be iterated through, and Traversable extends that traversal to being able to pull it 'inside out' to do precisely that kind of sequencing above -- do a file load for each thing, but pull them all out into a single wrapper as it happens so that the IO ends up 'outside' the Texturing.

    so i went and instanced Foldable and Traversable, and then i was thinking, well, what other common typeclasses should i be thinking about? it's not Applicative or Monad since it can't really combine two things, and it's definitely not Semigroup or Monoid for the same reason. since it's not Applicative it can't be Alternative, and since it's not Monad it can't be MonadPlus. that basically covers all the common typeclasses, so i guess i'm done.

    the thing with writing types is that it's generally possible to explicitly make a type into a valid typeclass instance, right? Semigroup just means "can be added together", and you can trivially implement that by adding in a new constructor like Several [Texturing a]. it's just like, well, is that really useful? given what the typeclass means, does that make some kind of sense? like i could say, maybe, that it's a semigroup in that adding two values together just dumps their values into a Random constructor, but then i'd have to make up weights, and that works well enough if the values being combined are Fixed or Random already, but Repeat would just make a huge mess, etc. so Semigroup doesn't really make sense. Monoid extends Semigroup with the addition of a null 'empty' object, so that would mean like, "totally untextured object" i guess, which is something that we actively want to avoid having!

    but Applicative, hmm, that would maybe be useful.

    looking at the ctm mod's config files (that's optifine's impl but it's the same config format) reveals a kind of pattern: there are 'base level' methods that do something -- ctm, ctm_compact, fixed, random, repeat, vertical, horizontal, top, and overlay -- but then there are also a bunch of methods that only exist to say "run two of these methods together": horizontal+vertical, vertical+horizontal, overlay_ctm, overlay_random, overlay_repeat, and overlay_fixed. what the mod lacks is a general-purpose way to say "use this one texturing method and then use this other texturing method on top"; they've had to special-case in each combination with its own mode.

    Applicative has two functions: pure :: a -> f a, which here is trivially just Fixed, and (<*>) :: f (a -> b) -> f a -> f b, which has to do with combining two values together. so if i want to combine two texturing values together...

    data Texturing a
      = Fixed a
      | Random Float [(Float, Texturing a)]
      | Repeating Int Int [Texturing a]
      deriving (Eq, Ord, Show, Read)

    note that Random and Repeating now take a list of Texturing a, rather than just a. this makes Fixed into a kind of 'terminal value', since the other two constructors will recurse to contain more Texturing as; it's only Fixed that will stop and finally produce an a. this new type is still a Functor, Foldable, and Traversable, but it's now also an Applicative:

    instance Applicative Texturing where
      pure a = Fixed a
      tf <*> ta = case tf of
        Fixed f -> f <$> ta
        Random tw wfs -> Random tw $ second (<*> ta) <$> wfs
        Repeating w h fs -> Repeating w h $ (<*> ta) <$> fs

    this expresses how it's possible to nest these values arbitrarily, which it turns out is the general-case version of what the ctm mod has to special-case. here, i can say 'random + repeating' to make a repeating brick pattern that's randomly interrupted by a flagstone:

    Random 1
      [ (0.875, Repeating 2 2
        [ Fixed "tile_bricks_bl.png"
        , Fixed "tile_bricks_br.png"
        , Fixed "tile_bricks_tl.png"
        , Fixed "tile_bricks_tr.png"
        ])
      , (0.125, Fixed "tile_flagstone.png")
      ]
    done the other way, it's 'repeating + random', which presents a repeating brick pattern with a random chance of each section having a variant:
    Repeating 2 2
      [ Random 1
        [ (0.75, Fixed "tile_bricks_bl.png")
        , (0.25, Fixed "tile_bricks_bl_cracked.png")
        ]
      , Random 1
        [ (0.75, Fixed "tile_bricks_br.png")
        , (0.25, Fixed "tile_bricks_br_cracked.png")
        ]
      , Random 1
        [ (0.75, Fixed "tile_bricks_tl.png")
        , (0.25, Fixed "tile_bricks_tl_cracked.png")
        ]
      , Random 1
        [ (0.75, Fixed "tile_bricks_tr.png")
        , (0.25, Fixed "tile_bricks_tr_cracked.png")
        ]
      ]

    when i add in the other texturing modes like ctm or overlay, those would just be more constructors that take Texturing a values, and would combine with each other easily and arbitrarily. all these examples don't even use <*>, they just take advantage of the general restructuring afforded to them by the Applicative rewrite.
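    (for completeness, a sketch of what reaching for <*> itself looks like: every Fixed leaf of the left texturing gets the whole right texturing substituted in, so pairing a random choice with a repeating pattern distributes the pattern under the randomness. tile names hypothetical:)

    overlayPair :: Texturing (String, String)
    overlayPair = (,)
        <$> Random 1 [ (0.9, Fixed "base.png"), (0.1, Fixed "mossy.png") ]
        <*> Repeating 2 1 [ Fixed "left.png", Fixed "right.png" ]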

    i think this, more than anything, is the good part of haskell? its typeclasses are actually useful because a lot of them represent general-purpose data transformations, and that leads you to thinking "would this transformation be useful to have here?", and a lot of the time the answer to that is 'yes'. the conceptual pattern for organizing these values was always out there, but it took haskell being like, "hey this is what a Functor is, this is what an Applicative is" for me to consciously realize that was maybe a target i should be reaching for

    the final type (as of now) is this:

    data Texturing a
      = Fixed a
      | Random Float [(Float, Texturing a)]
      | Repeating Int Int [Texturing a]
      deriving (Eq, Ord, Show, Read)
    
    instance Functor Texturing where
      fmap f t = case t of
        Fixed a -> Fixed $ f a
        Random tw was -> Random tw $ (fmap . fmap . fmap) f was
        Repeating w h as -> Repeating w h $ (fmap . fmap) f as
    
    instance Foldable Texturing where
      foldMap f t = case t of
        Fixed a -> f a
        Random _ was -> (foldMap . foldMap) f $ snd <$> was
        Repeating _ _ as -> (foldMap . foldMap) f as
    
    instance Traversable Texturing where
      sequenceA t = case t of
        Fixed a -> Fixed <$> a
        Random tw was -> Random tw <$> sequenceA (fmap (sequenceA . fmap sequenceA) was)
        Repeating w h as -> Repeating w h <$> sequenceA (sequenceA <$> as)
    
    instance Applicative Texturing where
      pure a = Fixed a
      tf <*> ta = case tf of
        Fixed f -> f <$> ta
        Random tw wfs -> Random tw $ second (<*> ta) <$> wfs
        Repeating w h fs -> Repeating w h $ (<*> ta) <$> fs
    which is a little frustrating to read (sequenceA (fmap (sequenceA . fmap sequenceA) was), you don't say) but it all works and very simply specifies the necessary information each tile type needs to be textured, and it shouldn't be particularly difficult to expand for future texturing types.


  • Oct. 4th, 2019
    Tags:
    • gamedev challenge,
    • programming
    posted @ 12:14 pm

    gamedev reportback: polyhedra coordinates

    so i wanted to take a break from rendering stuff for a while and work on something more theoretical, and decided to try polyhedra coordinates

    the goal is to eventually get polyhedra coordinates working with my graph generator, so that instead of making flat 2d maps i can wrap them around the surface of a planet, and use a different set of graph expansions to simulate (in a super rough and simplified way) some geological processes that will end up generating an entire planetary map

    that being said, this two week chunk of time did not accomplish that. i have a good idea of where i went wrong and how to fix it, but, out of time so i'm gonna stop working on it for the time being.

    (the thing with coordinates on spheres, or polyhedra generally, is that there's curvature. this comes with two general classes of complications: one, the math can't be everywhere-the-same in the way a flat square grid is always going to be x±1, y±1 for adjacent tiles, everywhere, since due to the nature of a coordinate system the distortion gets pushed around so that it's more present at some locations than others; and two, since the polyhedra wraps around, there's the question of what that looks like in the coordinate system. a wraparound square grid is like a torus, in that you can just say "left and right edges connect; top and bottom edges connect", which leads to some very simple math like if x < 0, x + x_max; if x >= x_max, x - x_max to perform the wraparound. on a polyhedra things like orientation come into play, where directions don't mean the same thing after crossing an edge.)
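    (that flat-grid wraparound really is a one-liner -- haskell's mod already wraps negatives the right way -- which is exactly what has no polyhedral equivalent, since crossing an edge can change what 'x+1' even means:)

    -- torus-style wraparound on a flat grid
    wrapX :: Int -> Int -> Int
    wrapX xMax x = x `mod` xMax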

    i had this whole idea of using a winged-edge data structure (which i've been using for all those polyhedra renders) for tracking the structure of a world, and mixing coordinate math with graph-traversal code to figure out paths.

    so like, specifically, i'm interested in doing coordinate math on G(n,0) goldberg polyhedra. the existing libraries i've seen that do this are earthgen and hexasphere, and they both handle things with a giant graph: each hexagon (or pentagon) tile is a node, and the six (or five) tiles it's adjacent to are the other nodes it has edges to, and they handle all pathfinding as a pure graph-traversal problem. that's fine for small shapes, but when you get to gigantic planetary-sized ones that's like, a five million node graph you have to keep in memory to do anything. also in a graph representation, there's no real sense of direction; you can't really say "go straight" since there's not a clear way to map edges to "opposite pairs".

    my idea had to do with the underlying shape: all G(n,0) polyhedra are fundamentally shaped like dodecahedra/icosahedra (they're duals so it's kind of the same shape). that means that no matter how big the polyhedra gets, it'll still be topologically the same; there will just be more tiles there, but the fundamental shape of its distance metric won't change. so instead of storing a million-part graph, you can just store an icosahedron graph, and that's actually enough.

    all that theory still seems sound, it's just when i got into the actual nitty-gritty coordinate math i ended up doing some math wrong and got impatient and kind of lost the thread and ended up with a mess that didn't come close to working. so that's a little disappointing, but, oh well, it's all just practice for the time when i actually get it right.

  • Aug. 31st, 2019
    Tags:
    • gamedev challenge,
    • programming
    posted @ 10:14 pm

    so this 2 week project started out as "work on the rendering pipeline", since that's slow and bad, and ended up morphing into mostly being about texturing.

    the original plan was to do low-level rendering changes, stuff like doing in-place render updates, or just reuse of existing buffers rather than every update needing to be a full reallocation. the main problem was that doing a growth tick took like a third of a second due to the re-render, and that was just... not acceptable performance under any circumstance. but it turns out fixing that is difficult

    i did end up restructuring the code to use vertex indices (instead of duplicating vertices), and i split render updates into landscape updates and object updates, which did speed up the growth ticks a lot, but it didn't really solve the underlying problems.

    instead i got textures working. this involved a few things: writing a shader to do triangular texel mapping, changing the rendering code to produce correct uvs for all the world geometry, asset-loading code to load different textures, some tile data-handling to get their information into the right places where it could be used to figure out what texture they should use, texture-atlasing code to merge textures together so that i could merge geometry together into one big chunk that all shares the same gl texture, and some very rudimentary code to handle polyhedra "models" that generate the correct uvs for certain shapes. in the process of doing this, though, i did break (more like disable) the old lighting code, and i'm not sure what i'll do to replace it eventually.
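    (the core of the texture-atlasing part is a small uv remap -- sketched here with hypothetical types, since the real code also has to pack the sub-rectangles and rewrite the geometry:)

    import Linear (V2 (..))

    -- where a source texture landed inside the atlas, in 0..1 atlas coordinates
    data AtlasSlot = AtlasSlot { slotOffset :: V2 Float, slotSize :: V2 Float }

    -- rescale a texture-local uv into its slot: uv' = offset + uv * size
    remapUV :: AtlasSlot -> V2 Float -> V2 Float
    remapUV (AtlasSlot off sz) uv = off + uv * sz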

    so like, that's a fair chunk of stuff that's important and needed to be done, but it's not quite the low-level graphics stuff i was anticipating working on.

    • tile uvs
    • less eye-searing uvs
    • triangular texels
    • texture atlasing
    • correct wall uvs
    • way faster growth ticks (anigif)
    • uvs for simple models

    this has been kind of funny, since the new textures kind of look terrible? so all this work has basically just made the game look less nice, until i draw some less bad textures and work out some ctm stuff so it's not all minecraft-style repeating tiles. it's a big technical achievement, though.


  • Aug. 13th, 2019
    Tags:
    • code,
    • gamedev challenge,
    • programming
    posted @ 12:12 pm

    i ended up writing a todo list when i started and took some degree of day-by-day notes for this project, so i figured i might as well put together a writeup.

    august 1 - 14: PICKING

    ( it's long )

  • Mar. 14th, 2019
    Tags:
    • gamedev challenge,
    • programming
    posted @ 05:19 pm

    okay so, graph code

    my two-week projects have been kind of erratic recently, or maybe i've been biting off increasingly more as i get more and more settled into doing them (i mean i have been doing them for more than a year now, which, is a while). lots of things that aren't really finished.

    anyway this time i wanted to improve my graph generator by adding 'loose' or 'stretchy' edges

    so in the abstract, a graph grammar system is incredibly flexible and very powerful. however, my implementation of graph grammars, and the specific expansions and embeddings that i'm using, are really quite limited, and they hamstring the system a bunch and limit me to only generating a pretty constrained subset of what's actually possible.

    the biggest stumbling block i've been having is the case of distant nodes. there are three parts to this:

    1. the embedding code needs to generate an embedding as the graph is expanded, rather than all at once at the end -- this is so impossible graphs are rejected instantly when they happen, rather than maybe staying around for a lot of further calculation before needing to be rejected
    2. my embedding never moves already-placed nodes around, it only places new nodes -- this is because moving nodes around is very difficult to do algorithmically and would require a lot more code to do physically
    3. in this embedding, all nodes with an edge between them need to be adjacent

    what this means is that if a new dungeon graph is started with a start and an end room, then those rooms need to be adjacent, and no further graph expansion can push them apart. expansions might sever the link between the start and end rooms, for example to place a hallway between them, but that would just add a 'hallway' room that runs along the two rooms, not a hallway that pushes between them to separate them.

    (this also comes up if/when i get graph embedding on a larger graph working -- when i was working on polyhedra, it was pretty trivial to make a graph out of a polyhedra by assigning a node to each face and an edge to each edge, and then it could be possible to embed, e.g., a world map graph onto an arbitrary polyhedra. but that would have weird coverage (like the world map only spanning half of the actual world shape, or w/e) unless it was possible to kind of knit a 'minimal' graph across the entire surface, say by generating a subdivided polyhedra and then using the original, non-subdivided polyhedra as a graph overlay, and then expand from there. but that would generally require having nodes in the knitted graph that have edges between them, despite not representing faces that are edge-adjacent on the actual subdivided polyhedra.)

    so what i really wanted to do was add some form of 'loose' edge, that denotes connectivity but that doesn't imply that the shapes need to be directly adjacent. then i could generate a dungeon that's got a start-to-end initial graph with a big long loose edge between the rooms, and so then graph expansion could start by cutting into that edge.

    (this also also comes up when thinking about graph subdivision -- generating one big 'world map' and then having each node in that graph generate its own area graph. if the world graph spits out, say, that the forest is connected to the town and the canyon and the river, it's difficult to figure out how to generate a starting graph that correctly places the respective exits to the town, canyon, and river areas while still retaining enough free space for the deck of 'forest' graph expansions to do something interesting. but if it's possible to just place those nodes correctly to start and hook them all to some central room with loose edges, then the graph is still very minimal while still having the placement guarantees that are needed.)

    it turns out for dungeon graphs, what a 'loose' edge represents is pretty clear already: it's a hallway, and an expansion that happens along that edge can just place the new room somewhere where it overlaps with the hallway, and then cut the hallway into two pieces.
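    (sketched as data, with hypothetical names -- the real types live in Data.GraphGrammar and Shape.HexEmbedding: an edge either demands direct adjacency or carries the strip of hexes its hallway occupies, and placing a room at position n cuts that strip in two.)

    type Hex = (Int, Int)  -- stand-in for the actual Shape.Hex coordinate type

    data EdgeShape = Adjacent | Hallway [Hex]

    -- place a room over the nth hex of a hallway, leaving two shorter hallways
    cutHallway :: Int -> [Hex] -> ([Hex], [Hex])
    cutHallway n hexes = (take n hexes, drop (n + 1) hexes)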

    so this two week project was about doing that. i did... part of it?

    partly the thing is that since haskell lets you divorce code so thoroughly, this was really about changing like three separate libraries:

    1. Data.GraphGrammar itself needs to start caring about edge data, and it needs to do things like e.g., provide a lens for embedding edge shape data into the edge type, in addition to providing a lens for embedding node shape data into the node type. it also needs to start tracking new edges, and providing them to the embedding function in the same way it tracks new nodes -- basically it's been treating edges as unimportant, second-rate values for a while since i haven't ever actually been using edge data for anything
    2. Shape.HexEmbedding needs to figure out how to do something with that new edge data. this means a new type (LooseEdge Hex, specifically) for use as edge data, plus new hex collision code in Shape.Hex that can handle the relevant shapes. my currently-existing collision code is incredibly slow in all cases save for hex/hex collisions, so i need to figure out, at the very least, good hex/line and line/line collisions.
    3. Generator.DungeonGen needs a new generator that actually uses these new types to design new kinds of dungeons that use loose edges.


    also, when i actually render these things, graphToSVG needs to understand what the new edge data type means and how it can be rendered correctly.

    so i ended up doing the first item entirely, which is just abstract data-shuffling, and parts of the second and third items -- i have some new collision code written, but not enough to actually use, and i can successfully generate an initial graph for a circuit-based dungeon (or, theoretically, a linear start-to-end dungeon). i also figured out the rendering for hallways, which is neat because these are officially the first concave shapes that this setup can render!

    i have some in-progress screenshots of what those graph outputs look like:
    • first attempt at long edges
    • starting to attach the edges to nodes in reasonable ways + embed them to the grid correctly
    • drawing the edges as space-filling map sections
    • finally assembling

  • Mar. 6th, 2019
    Tags:
    • code,
    • programming
    posted @ 08:07 pm

    i made a new macro for twine!

    first, you can check out a small demo that uses it.

    this is for twine 1.4, and is a macro that adds some slightly more advanced form controls, with inline displays. this adds three new macros: <<toggle>>, <<block>>, and <<endblock>>.

    a <<toggle>> is in the form <<toggle $varname groupname value display>>, and it generates some clickable text ("display"), which, when clicked, will set $varname to value. if there are multiple toggles with the same groupname, clicking one will deselect any others.

    a <<block>> is in the form <<block groupname>>...<<endblock>>. when a toggle is selected, it will also inline refresh the text inside any blocks with matching groupnames.

    (if you want text to be refreshed based on multiple toggle groups, you can nest <<block>>s with no issue.)

    an example of use:
    <<toggle $foo foo foo_a "foo a">>
    <<toggle $foo foo foo_b "foo b">>
    <<toggle $foo foo foo_c "foo c">>
    
    <<block foo>>\
    $foo is <<print $foo>>
    <<endblock>>
    
    <<toggle $bar bar 1 one>>
    <<toggle $bar bar 2 two>>
    <<toggle $bar bar 3 three>>
    
    <<block bar>>\
    $bar is <<print $bar>>
    <<endblock>>


    would display two sets of three selections, with a "$var is ..." blurb beneath each one that's automatically updated every time you change a value.

    ( here's the code )

  • Mar. 1st, 2019
    Tags:
    • code,
    • gamedev challenge,
    • programming
    posted @ 12:40 pm

    uniform-tagged shaders with gpipe

    okay, two week projects

    i didn't do something for february 1-14 since i was sick for that entire stretch (i was sick for like three weeks and it sucked), but i did get some stuff done for the 15-28 period.

    gpipe has this enormous infrastructure for type-safe shaders, so that you can never write the wrong kind of vertex pipeline to a shader, or write the wrong thing to a buffer generally, and all of its weird internal types are instanced to a mess of typeclasses so that they get marshaled correctly automatically. it's real nice. but what they don't do for you is handle uniforms. or rather, they handle uniforms (you can write to them and use them just fine) but they don't have any kind of construction for "this shader uses uniforms of type x y and z and so before you run the shader you have to provide x y and z values". basically they don't really surface the shader dependency on uniforms into the type system at all; a shader is just a shader and if you need to write uniforms for it then you need to manage that yourself.

    in my existing code, the only uniform i really used was the camera, which was updated once at the start of each frame, and while i had lots of ideas for useful shaders i could write, i basically had no way to store or structure them. my code ran using a list of RenderUpdate os values (where os is a phantom type value that represents which rendering context the render event came from), and what that means is that if i wanted to keep that same basic infrastructure of having a render cache full of render actions, i would need to hide all the shader information, since haskell doesn't have heterogeneous lists -- i couldn't have a list of like, RenderUpdate os ShaderFoo and RenderUpdate os ShaderBar.

    so already i was thinking this would have to be an existentially-quantified thing: have something like

    data RenderObject os = forall s. RenderObject
        { shader :: CompiledShader os s
        , chunks :: [RenderChunk os s]
        }
    


    so that the shader could vary based on the type of the chunks, but that would be hidden inside the type so that the overall type is still just RenderObject os, so i could continue using a simple list of render updates. additionally, i'd need some way to attach uniforms to shaders, which would be an entire other step that would need to be done in a similar fashion -- stuff the uniform type in an existential somewhere, so that i could automatically set them in some fashion.
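    (one possible shape for that, heavily hypothetical since the actual code is under the cut: hide the uniform payload behind the same existential, next to whatever action writes it, so callers still only ever see a RenderObject os:)

    {-# LANGUAGE ExistentialQuantification #-}

    data RenderObject os = forall s u. RenderObject
        { writeUniforms :: u -> IO ()   -- placeholder for the real uniform-buffer write
        , uniformValue  :: u
        , shader        :: CompiledShader os s
        , chunks        :: [RenderChunk os s]
        }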

    (i should also say that i put off working on this for a while since i only had a hazy idea of how this would work, and i figured it would be really complicated and involve a lot of weird type stuff.)

    so with all that being said let's talk about the code i wrote.

    ( lots of code and code talk under the cut )

  • Feb. 12th, 2019
    Tags:
    • code,
    • programming
    posted @ 10:39 am

    reactive-banana and behavior lag, among other things

    backstory: i've been using the reactive-banana FRP library to handle the i/o for my haskell stuff, because if you recall i spent one of these two-week projects hacking together a haskell-style i/o management setup that i kept tying myself into knots with and eventually somebody was like "this sounds like you're reinventing the concept of FRP and you should maybe use one of these libraries". then i got wrapped up in trying to comprehend how reactive-banana works and is expected to be used, and how you can practically use it to do things. it's not exactly the most clear, so i figure it's worth it to post about my HARD-EARNED UNDERSTANDING of how it works.

    ( this post assumes that you basically understand haskell and that you've used reactive-banana enough to have some problems with it. )

  • Jan. 4th, 2019
    Tags:
    • programming
    posted @ 02:55 pm

    graph grammars

    so when i was doing my old dungeon generation/landscape generation code i was using a handmade graph grammar system that was cut into two halves: the actual graph expansion code, which did subgraph identification and proposed random expansions; and a graph embedder that was responsible for successfully placing a proposed expansion or rejecting it as impossible.

    it's a pretty simple setup, and it works, but it's kind of impossible to declare or run various kinds of more complex expansion/embedding rules -- or to handle rules at all that involve expansions dependent on the embedding, like e.g., connect two random unconnected nodes: the expander can't sort possibilities to only try adjacent nodes; it would have to try adding an edge between every pair of nodes. my graph embedder also couldn't really handle expanding an edge into a node, since that would involve pushing existing nodes apart, and i never worked out all the physics-like code for handling those kinds of situations. and since that was the code i was using i started to think like "oh well there are some problems like solid map generation that graph grammars just aren't good at", when actually the real problem was that my code was very bad at them; they're entirely reasonable to use for all sorts of stuff that my code is atrocious at.

    so i'm thinking i should probably try to fix up the graph expansion and graph embedding code. i wrote a way to extract a graph from a given polyhedra, and so theoretically i could use that to construct some graph embeddings in 3d, but as my code currently exists it would be very poor at building a real world map. very poor support for wraparound. but that's an artifact of my bad embedding more than anything else, so, maybe it's time to actually try to fix that.

  • Dec. 18th, 2018
    Tags:
    • programming,
    • screenshots
    posted @ 12:52 pm



    please reblog my photoset etc

    i've been working on rendering more complex shapes, because currently all i can really manage is prisms and prismatoid-style shapes. so i was like, let's try generating all the platonic solids, and then all the archimedean solids and the catalan solids, and go from there. there are lots of neat shapes out there that are computationally-feasible to generate, it's just, uhh i'm bad at shapes.

    historically when i've rendered stuff i've just had a polygon soup, but for this i wanted to have enough information around so that i could do things like truncation or generate duals, so i'm using a winged-edge data structure, and only just starting to make my way around understanding how it works. but at least i have it working well enough to render some basic polyhedra. theoretically when i get duals working i'll be able to dual the cube and the dodecahedron to get the octahedron and the icosahedron, and the first few catalan solids are all kleetopes of simple polyhedra, which means if you kleetope the platonic solids and then get the dual of that shape you've gotten an archimedean solid. turns out by repeating various klee / truncation / dual operations you can end up constructing a whole bunch of complex polyhedra

    but for right now i'm just glad i got three rendering

    e: got prisms rendering


    this has given me some understanding of how to algorithmically generate polyhedra in this format. antiprisms might not be too tough either.

    e2: got duals generating

    now it's time for either antiprisms or kleetopes. with kleetopes and duals it's possible to generate many of the archimedean/catalan solids. antiprisms are another infinite family of shapes. getting something like trapezohedrons generating would be neat, and those are apparently the duals of antiprisms. diminished trapezohedrons are neater-looking but i don't think there's any easier way to generate them aside from just generating them.

  • Oct. 15th, 2018
    Tags:
    • gamedev challenge,
    • programming
    posted @ 10:49 pm

    okay so two week projects

    i didn't write a writeup for the last one so this is two-in-one.

    so for the latter half of september i tried out procedural magic systems, and ultimately i wrote down a lot of notes and concepts but i didn't get a lot specifically done. part of the reason why is that i tried using my possibility space code to handle it, and actually this is something that it's remarkably ill-suited for

    the possibility space code really works the best when there's a really large possibility space and you just want a handful of mostly-unique items. with a procedural magic system, there are a lot of complex constraints. there's lots of things like "for each rank in the magic system, it should usually generate one of each magic type (healing, control, summoning, attack, etc) available in the system, and it should usually not generate two spells of the same type", or "when generating magic effects, it should usually either reuse the same adjective often or never (e.g., holy healing vs. holy bolt), but not sometimes" that the possibility space code really couldn't handle very well. it's definitely possible for the possibility space code to have some part in generating systems, but it definitely can't handle the main generation logic.

    so basically in working on that, i got to the point of realizing that and stopped, because i didn't want to totally start over w/ some bespoke haskell code. but this is something i'd want to revisit at some point, b/c, well, hey procedural magical systems. it'd be neat.

    here are some outputs i generated:

    • 001
    • 002
    • 003
    • 004

    (one of the things that i realized pretty quick was that what i wanted was... each magic school as having a messy cluster of 'domains'. i started with 'light' and 'dark' and very rapidly realized the words and concepts i was using were actually from several distinct subclusters. so i broke them apart into a set of relations (sketched below): 'light' as in actual light, associated with 'sun' and 'sacredness' and 'purity'; 'dark' as in shadow, associated with 'moon' and 'occult' and 'decay'. and it'll first pick one domain and either a random associated domain or the opposing domain, and during generation it can generate 'mystery' events that would pick a new associated/opposing domain and start to generate spells of that domain.

    this has an issue where it doesn't really handle comprehensive science-type magic very well -- there would be no magic schools about converting earth air water and fire magic into each other. it's always got a specific elemental flavor. that's something i'd have to think about how it would work.)
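    (the domain relations described above might be encoded as something like this -- purely illustrative:)

        data Domain = Light | Sun | Sacredness | Purity
                    | Dark | Moon | Occult | Decay
            deriving (Show, Eq)

        -- a school seeds from one domain plus either an associate or the
        -- opposite; 'mystery' events pull in further domains later
        associates :: Domain -> [Domain]
        associates Light = [Sun, Sacredness, Purity]
        associates Dark  = [Moon, Occult, Decay]
        associates _     = []

        opposite :: Domain -> Maybe Domain
        opposite Light = Just Dark
        opposite Dark  = Just Light
        opposite _     = Nothing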


    the other thing i worked on, for the first half of october, was input stuff again. kind of a continuation of this thing. i'm using the reactive-banana FRP framework still, which, is okay. as mentioned previously, there's a bit of an issue with the rendering -- my understanding is that the event network (which is its own monad thing) expects to effectively be the 'main loop' of the program, and handle all input (via handlers firing events) and all output (via an event stream of IO ()). but that's not feasible for real-time rendering, and gpipe does all its render stuff inside a different rendering monad that can't really be converted to an IO () action. so i'm not entirely sure how i'm gonna get everything to work together yet.

    that being said, this two-week section went a lot better than the prior two-week section. the goal was to figure out automatic synthesis of input handlers, so that i could just make some inert data type like a Menu a or Form a or whatever, and populate that as desired, and then hand that over to get an Event a out of it.

    let's talk about input handlers for a bit

    on the lowest level, GLFW provides callbacks for raw input events. these are things like 'key pressed' or 'cursor moved' or 'mouse button clicked'. only minimal state is recorded (just the modifier keys -- shift, alt, ctrl, etc), so if you want to know where the cursor was when a click happened, you need to maintain that state yourself, because as far as GLFW is concerned 'mouse click' is wholly separate from 'cursor position'.

    so last time i put together some slightly more structured events, most notably click events that carry the position they were clicked at. but ultimately you don't want to be thinking at the level of individual input events at all. in my previous attempts at handling input, i generally had one gigantic handler that took the raw events and did stuff with them according to game state. stuff like 'did we get an up/down arrow press?' 'are we in a menu?' 'if we are, and we got an up arrow, and the index of the menu isn't already the top, then move the index up one' 'otherwise if we got a down arrow, and the index of the menu isn't already the bottom, then move the index down one' and so forth and so on for every possible action. if the game was in a certain state -- menu, world movement, paused, dialog box, shop screen, config screen -- then that state needed to be recorded and shoved into the scope of the input handler, and then the input handler would look at its available state and do whatever it needed to do, given the state and the event.

    i would also generally have to manually construct all the menu layouts. if it was something simple like a list of options, that was pretty easy to automate. but for something like an options screen, with lots of different labels and toggles and selection fields, i would have to literally go, for every single thing, "okay this is at coordinates x,y and it extends w,h and when it's clicked it should set this state", over and over and over again, and test a bunch to make sure i hadn't messed up and rendered overlapping items. it was a giant hassle.

    for this, in haskell, i really wanted to avoid that entire nightmare. i wanted the program to do all that for me.

    think about filling out a web form: you might click on some checkboxes, click on a select box, mouse down through the drop-down box, click again to select an option, click on a text field to focus it, type some text in, click on a separate text field to focus it, type different text, click on a radio button, click up through a number field, mousewheel scroll to get to the right number, and finally click on a submit button. there are a lot of low-level input events happening there, and none of them really escape the context of "making a form". so instead of thinking about each raw input event as it comes, why not restructure things so that you're expecting an input stream of the form itself? when the form is done (or cancelled) it'll emit an event, and until then all the low-level input will be handled automatically by its own thing, and i as the programmer writing more code wouldn't ever really have to touch any of the input events or think about what they're doing.

    what that actually means is writing code that would 1. auto-generate a layout for a given ui thing, maybe with some hinting, 2. manage internal state like 'is this checkbox checked', and suitably generate render update actions when anything is selected or deselected or w/e, and 3. synthesize a new event stream from the raw input event stream. what this accomplishes in haskell is that it takes user input and treats it like any other kind of value -- a Form a is a Functor and can be mapped over and changed and composed and ultimately extracted into an event stream, which means that it's possible to treat these values as 'haskelly' values that you can reason about, instead of ephemeral things that are piecemeal assembled inside a handwritten event handler.

    this kind of thing is basically entry-level for actual gamedev toolkits, but uh i've never actually gotten around to doing it. mostly because i haven't really been using haskell for realtime stuff until very recently.

    anyway i did it, is the short version. i vaguely remembered using reform as part of happstack for my web server, and they have a form type that is an applicative. i also remembered they mentioned that reform was based off of digestive-functors, which was a simpler implementation (reform has a whole proof-types thing that i don't need, at least yet), so i decided to check out how they did it. this led to some interesting hacks.

    digestive-functors defines a FormTree type that does a wild GADT hack.

    so a big part of the usability of this is that we want to have an Applicative instance for these types, right? so we can say (+) <$> number 0 10 <*> number 0 10 or whatever, and have a Form that runs that code when it's evaluated, without the programmer having to really care about which numbers were selected or where exactly every pixel was clicked. and that presents a problem, since to work as a form element, it needs to cache its original constructor. this is what i tend to think of as the heterogeneous infinite type problem: if we have a type Foo b a that stores a value of b and can convert it to type a, then anything that stores a Foo b a needs to include the type variables a and b in its type signature, even if we only ever pull an a out of it. this means it's impossible to make that type do anything useful, because the types will never line up (e.g., can't instance Functor b/c it's not Foo b a -> Foo b c; it would have to be like Foo b a -> Foo (Foo b a) c, which is not... robust). consequently any type that has to cache its transformations is pretty impossible.

    except you can use existential quantification to avoid that, by just hiding the type variable:

        {-# LANGUAGE ExistentialQuantification #-}

        data Deferred a = forall b. Deferred b (b -> a)

        run :: Deferred a -> a
        run (Deferred b f) = f b

        instance Functor Deferred where
            fmap f (Deferred b g) = Deferred b (f . g)

    which is the first time i've actually seen a use for existential quantification.

    this gets subsumed under GADTs:

        {-# LANGUAGE GADTs #-}

        data Deferred a where
            Deferred :: b -> (b -> a) -> Deferred a

    and that also solved a problem i was having with checkboxes. so a checkbox constructor should always return a [a] value, right? like that's how checkboxes work. but without GADTs there's no way to force a constructor like that.

    so my type ended up looking like

        data Form a where
            Checkboxes :: Eq a => [(String, a)] -> (a -> Bool) -> Form [a]
            Radios :: [(String, a)] -> Int -> Form a
            Textbox :: (String -> Either String a) -> Int -> String -> a -> Form a
            EnumSet :: (Enum a, Show a, Read a) => a -> a -> a -> a -> Form a

    to represent your basic ui elements. the thing with that is, well, that's not a Functor, right? imagine running fmap head on Checkboxes [("foo", 1), ("bar", 2), ("baz", 3)] (const False). Checkboxes can only construct a Form [a], so you can't reconstruct the fmapped value with Checkboxes in the same way that you could with Radios.

    this is where we cheat:

            Pure :: a -> Form a
            App :: Form (x -> a) -> Form x -> Form a

    now you can implement Functor (and Applicative) by just shoving everything into the App constructor:

        instance Functor Form where
            fmap f v = App (pure f) v

        instance Applicative Form where
            pure x = Pure x
            (<*>) f v = App f v

    and then when you run it, you can extract those functions and evaluate them.
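    (the evaluation is a straightforward fold over that spine. here's a hedged sketch, not the real code -- in particular i'm guessing at what the EnumSet fields mean, and the real version reads live ui state rather than the stored defaults:)

        -- evaluate the deferred applicative spine against the constructors above
        runForm :: Form a -> a
        runForm (Pure x)  = x
        runForm (App f x) = runForm f (runForm x)
        -- base elements evaluate to their stored defaults in this sketch
        runForm (Checkboxes opts checked)  = [ v | (_, v) <- opts, checked v ]
        runForm (Radios opts i)            = snd (opts !! i)
        runForm (Textbox parse _ text def) = either (const def) id (parse text)
        runForm (EnumSet _ _ _ cur)        = cur  -- assuming the last field is the current value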

    but wait, i hear you say. that's not a valid Applicative instance! it breaks the Applicative laws! pure id <*> v === v no longer holds! but. doesn't it? after all in every possible evaluation path the same value will be generated. sure, internally it's an App (pure id) v constructor instead of a v constructor, but like, internally in haskell id . f would be a different thunk than f, so...

    (actually that's maybe not true; i am not really that familiar with how haskell thunks work. but you get the idea.)

    anyway the actual function is pretty messy due to the way rendering interacts with events -- the full type is buildForm :: MonadMoment m => RenderState os -> Event FauxGLFW -> Form a -> ([RenderUpdate os], m (Event a, Event [RenderUpdate os])), which returns 1. a list of initial render actions, which need to be rendered when the form is first displayed, and 2. a tuple of event streams in the FRP monad, one corresponding to form submissions and the other corresponding to intermediate render updates (e.g., text being typed or checkboxes/radios being selected or deselected). in actuality it's still super unfinished, since i only wrote rendering/behavior handlers for the checkbox constructor. text input is a mess.

    anyway i'm feeling tentatively positive about this. i still need to figure out how to actually use that Event a value to do things; right now i'm just debug printing it, but presumably the game would be in a state where it's prepared to do something with the a events it receives. and i still have no clue how to selectively enable/disable forms in practice -- so, opening and closing ui boxes. still a very primitive thing.

    anyway i'm sure i'll figure it out at some point.


  • Jul. 29th, 2018
  • xax: purple-orange {11/3 knotwork star, pointed down (Default)
    Tags:
    • programming
    posted @ 01:24 pm

    so, gpipe

    out of the various haskell graphics libraries i've used, gpipe has so far been the best. that being said, the list of haskell graphics libraries is pretty short.

    lambdacube was... okay. well, it was okay up until i hit the point where there were two different ways to specify uniforms, neither of which seemed to work, and there was precisely zero documentation. its bespoke shader language was also like haskell right up until it wasn't, with very minimal documentation of its own. it was kind of a mess.

    opengl (not opengl-raw) papers over a few arbitrary parts of the opengl state machine, while also not even exposing other parts of it, so that any non-trivial program would need to use both opengl and opengl-raw and just hope that their raw calls weren't mucking about with anything that opengl was assuming was safely encapsulated.

    gpipe, though... the degree of encapsulation it has over actual opengl is pretty amazing. all the shaders are written in plain haskell, in the same file as the rest of your code. gpipe is really good at communicating with the haskell universe, in ways that sometimes border on the magical. opengl made it clear that you were pulling levers on a big opengl machine, and lambdacube had you naming and dispatching uniforms according to the opengl model. in gpipe uniforms just kind of... vanish. a lot of opengl concepts vanish, and are replaced with very haskelly functions and types that behave in the way you'd expect haskell functions and types to work.

    there are some downsides, though. gpipe is very type-safe, but that type safety comes at a cost, and the cost is some of the most byzantine type signatures i've ever seen in non-theory code.

    take for example texture sampling: sample2D returns a ColorSampleable c => ColorSample x c. a ColorSample x c is actually just a type synonym for Color c (S x (ColorElement c)). a ColorElement is a type family within the ColorSampleable class that has like a dozen instances mapping types between RGB* and * (like, RGBWord to Word). but those RGB types are themselves a type family in the Format typeclass, and they count components. so Color RGBWord a is 'actually' a V3 a, with the V3 from the RGB part (vs. RGBA or RG or w/e, which would give V4 or V2), and the whole sampled value bottoms out at V3 (S x Word), with the Word from the ColorElement instance.
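    (written out as type bookkeeping, following the chain above:)

        --   ColorSample x c       =  Color c (S x (ColorElement c))
        --   Color RGBWord a       ~  V3 a    (three components, vs. V4 or V2)
        --   ColorElement RGBWord  ~  Word
        --   so ColorSample x RGBWord works out to V3 (S x Word)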

    gpipe functions are rife with this kind of thing, having two or three type variables that are peppered across the argument types repeatedly and bound to various typeclasses. when they all infer correctly, it's great. when they don't, i have to dig around: is it that it can't correctly infer the type (which it frequently can't)? or am i actually giving it the wrong kind of data, and in that case what actually is the right kind of data? is it only the wrong kind of data because somewhere else i'm using a different type that causes one of these type variables to be inferred as a non-matching type, and, if so, where is that other value?

    the lack of common opengl reference points can get kinda confusing too, especially in conjunction with the above. if i want to turn on the depth buffer, then i gotta be calling `drawWindowColorDepth` instead of just `drawWindowColor`, sure. but that also takes a different type of argument in several ways -- i need a FragmentStream that has a FragDepth value, and i need to provide depth comparison data for the shader pipeline. i also need to have initialized a Window with a DepthRenderable value. or if you want to turn on alpha blending, you need to provide a blending option for your `drawWindow*` calls, sure. you'll also need to have your window initialized with a `Format` type that includes an alpha component, which changes some of the types needed in your shader for certain calls (like clearing the window).

    these are things you can learn from checking the type signature, and that's generally how you learn Haskell frameworks, but there's so much type information, generally obfuscated behind chains of typeclasses and type families, and it's in places that are very weird if you're already familiar with how opengl works behind the curtain. none of that is bad as such -- well, the wacky type inference is bad -- but it definitely has caused a fair number of stumbles so far.

    for example, their 'hello world' demo never actually directly refers to the shader render data. a Shader os s a has a state of s, but you can't directly look at the state data; only a few functions expose it, by taking a function of type s -> {whatever} and using its result. that's fine, but it was not clear to me at all from the demo, because the demo only ever mentions the state indirectly, with functions like id or const. and it's not like there's any particularly clever trick there, it's fairly obvious, but there are so many clever tricks going on that it really obfuscates the overall code and data flow since everything has like three type variables attached and they interact with each other in weird ways.

    all those types do a lot of work: they wrap up and abstract away the underlying opengl setup, and do things like make it impossible to write the wrong kind of data to a shader, or the wrong color type to a texture, or whatever. that's very nice, and useful, but it also causes some weird issues itself. partly it's that all abstractions are leaky, right, and so there's always the looming threat of, say, my shader render state taking up too many slots and so being too big to be sent to the GPU and causing an error. which is an invisible constraint until it happens, since as far as gpipe is concerned the shader render function is just a normal haskell function, it's just that if you know opengl you know that that's the entry-point into the shader system and that there's a finite amount of stuff you can stuff through it at one time.

    all that being said though, this has been so much clearer, easier to use, and full-featured than any of the other haskell rendering setups i've checked out. i installed it uhh eleven days ago, and i already ported over my object rendering code, plus i set up a basic camera, plus i got textures working, UVs being synthesized for procedural models, and basic sprite font code working. some of that's just because i've been getting better as a programmer, but some of that is definitely because gpipe manages the most tedious parts of rendering and wraps up the rest of it in a way that it's difficult to break your rendering state.

  • Jul. 18th, 2018
  • xax: purple-orange {11/3 knotwork star, pointed down (Default)
    Tags:
    • programming
    posted @ 09:02 pm

    haskell platform wrangling

    oof so i tried updating my version of the haskell platform

    or rather, i wanted to get back into doing 3d rendering in haskell, and that means using a rendering library. lambdacube is out because of its whole busted-uniforms thing. for a while i was like "well it can't be _that_ hard to just roll my own", so i looked at the guts of lambdacube and nope, actually it is very complex. like, ultimately comprehensible, but very complex. so before i go that route there's one more rendering library to look at: GPipe. but installing it, oof.

    okay so i've been using a pretty outdated version of the haskell platform. still using ghc version 7.10, which is four years out of date, and a pretty old version of base. so i tried to install gpipe and it was like, your stuff is too old, can't install. that's actually kind of a good sign, because it does mean that this library is newer than four years old. and i was like, well, all my stuff is very old; i should upgrade. so i upgrade. except then i still couldn't install gpipe since now my version of base is too new.

    anyway the biggest issue is that in a recent base update they changed Monoid: it used to be freestanding and define both mempty and mappend (with <> as a synonym), and now Monoid only defines mempty and extends Semigroup, which owns <>. this is totally reasonable and makes it much easier to use semigroups, but it does break all existing Monoid instances, since now you also need a Semigroup instance. so i had to unpack the gpipe library files, tweak their dependencies, and add in a few new lines about semigroup instances, and it seems like that's all. i got the hello world program to run, so, that's a start. really hoping i'll be able to easily shift over all my lambdacube code to gpipe and see if i can then actually set some uniforms, so i can do things like 'scroll' or 'rotate the camera', which are pretty critical to doing any games stuff.
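    (the fix for each broken instance is mechanical; a minimal sketch, with Log as a made-up stand-in type:)

        newtype Log = Log [String]

        -- the combining operation moves from Monoid to Semigroup...
        instance Semigroup Log where
            Log a <> Log b = Log (a ++ b)

        -- ...and Monoid keeps mempty, with mappend defaulting to (<>)
        instance Monoid Log where
            mempty = Log []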

    but also, upgrading the haskell platform broke all my existing cabal sandboxes, and now i'm gonna have to pull that same unpack-and-tweak trick for the hell game dev environment, because some of its dependencies are outdated and impossible to satisfy now. that'sssss gonna be fun upgrading on the production server.

  • Mar. 7th, 2018
  • xax: purple-orange {11/3 knotwork star, pointed down (Default)
    Tags:
    • gamedev challenge,
    • programming
    posted @ 06:12 pm

    the myth generator is... working slightly more. this is essentially a continuation of this post about the nemesis system, since i'm using the same basic infrastructure. at that point i had gotten an extremely rudimentary form of actions running.

    my current gamedev challenge thing is myth/history generation, which is basically the same exact thing only with a different dataset. the main issue is actions: what actions can do, what effects they can have, and how actors select actions. currently i've added
    1. predicates on actions, so that who can run them can be limited
    2. mutations on actions, so that running an action can change any participant

    i've also tweaked some of the generator settings and made it more argument-driven, because previously i had to tweak the code in main and recompile every time i wanted to change the type of output.

    these are big improvements, but there's still a lot more to be done before this is anywhere close to being usable. on the technical level, i need to add the ability for actions to create or destroy actors (think: a city being founded or abandoned, an artifact being constructed or destroyed, characters being born or dying).

    on a structural level, i need to add in the concepts of time and space -- it's currently possible to have locations and to require two characters to be in the same location to act together, but that would mean having a bunch of actions like

        ACTION %1 ? %1.location == foo { %1.location @= bar }
        ACTION %1 ? %1.location == bar { %1.location @= foo }

    to denote a pathway between the locations foo and bar, and then giving every other action a specific predicate %1.location == %2.location. that would also enforce the same exact map for each run-through of the history, so no randomly-generated maps. it would be a complete mess. what would be really neat is hooking up the graph generator to this, so that the landscape is enforced by an actual graph representation of its shape, but neither project is really in good enough shape to handle that right now. and there's no concept of time elapsing or any kind of progression.

    it's technically possible to enforce a 'plotline' in actions now, but it's so unwieldy as to be impractical. what would be nice is, like, actors having a set of drives based on their characteristics, and picking actions based on what's allowed by their drives, weighted by those drives. i don't really know how i would codify that, though -- part of writing my own syntax is figuring out ways to meaningfully and compactly encode things like that, and i don't have a solid concept yet.

    that being said i should probably start putting together an actual myth-styled dataset now, instead of just reusing the nemesis one. not that that couldn't be a part of it, just, a myth generator necessarily has a much larger scope. meaningful myth generation also definitely requires actions generating new actors, especially if the myth generator only starts with a primordial god or an endless abyss or what-have-you.

    ( here's a sample output )

  • Feb. 26th, 2018
  • xax: purple-orange {11/3 knotwork star, pointed down (Default)
    Tags:
    • gamedev challenge,
    • plants,
    • programming
    posted @ 04:34 pm

    okay gamedev challenge reportback

    again

    i mean as should be expected at this point

    this was on my list as "evolution/genetics system", and it was something i kind of put on there because it was a thing i had always wanted to do, but it was down near the bottom when i just started listing unrealistic longshot ideas, because like... genetics seems hard? genetics seems really difficult and completely unrelated to any code i've written ever.

    so then it was kind of a surprise to be idly thinking about my enumeration code and have this train of thought like "it would be nice to be able to walk around inside the deck and see the values as they change, rather than only having a single index that tells you nothing about the structure of the value. i think if you tracked how values were added together you could get a list of all the independent axes of interaction, so you could actually get an n-dimensional rectangular prism representation of the enumerable space you're building up. and then it would be possible to get a list of all adjacent values, and randomly walk through the possibility space. ...you know, that is sounding a lot like random mutation in a genetic space"

    SO THEN i spent a while putting that code together (and as you have seen in some of the posts about Data.Enumerable here, i figured out lots of it but not all of it), and then i started working on actual plant rendering, since plants are a decent start in terms of simulating evolving things. i started by reimplementing some of my really old l-system code, and then trying to code up various l-systems that were mentioned in the algorithmic beauty of plants. i didn't get super far in, only to chapter 3, but that's deep enough to get to some fairly fiddly systems.
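    (for reference, the core of an l-system is just iterated string rewriting -- a minimal sketch using lindenmayer's classic algae system, not anything from my plant code:)

        -- rewrite every symbol in parallel through the production rules
        expand :: (Char -> String) -> String -> String
        expand rules = concatMap rules

        -- algae system: A -> AB, B -> A
        algae :: Char -> String
        algae 'A' = "AB"
        algae 'B' = "A"
        algae c   = [c]

        -- iterate (expand algae) "A" !! 5  ==  "ABAABABAABAAB"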

    after a while i kinda put together my own l-system setup, by having a plant data structure that could generate an l-system expansion; there are some screenshots of my generated plants on my code screenshots tumblr.

    but the actual simulation of plants growing in a real 3d space is... a lot more intricate than just running an l-system. i have been reading a lottt of academic papers over the past few weeks, some provided by friends who have journal access, but mostly pirated off of scihub, so, thank you scihub for existing or else i would have had to spend like $500 on downloading papers (or obviously more realistically just not being able to do it at all)

    i'm not going to get into the nitty-gritty details, but from my reading it sounds like it's still a bit of a hassle to manage plant growth as driven by light and water absorption. l-systems just kind of expand, they don't really respond to their environment, and while there's some study into environmentally-responsive l-systems, there's not a huge bulk of papers. there are some really neat l-systems that do manage nutrient distribution, but it seems like they do that by running two completely separate 'root' and 'plant' systems and kinda gluing them together at the base and somehow sending communications through them. so while i did get a 'genetic code' (my plant data structure) and a way to mutate, crossbreed, and grow it, i didn't get a full simulation working, since that would mean... actually simulating growth and pollination and seed dispersal, or at the very least a fitness metric that vaguely imitates that, leading to a second generation being selected.

    still, all of this is going to be really invaluable for when i return to this at some point and try to get nutrient flows working. the goal here is a little plant simulation that properly absorbs water from the ground and light from the sky and stores those in specific parts of its body and uses energy stores to grow further roots and leaves. that's a ways away, not the least because i'd have to figure out how to formulate that as an l-system, but i made a lot of good progress here.

  • Feb. 14th, 2018
  • xax: purple-orange {11/3 knotwork star, pointed down (Default)
    Tags:
    • code,
    • programming
    posted @ 12:35 pm

    this is the full code file for the new-and-improved Enumerable data type, with notes and pleas for help, in preparation for asking #haskell if they can help

    ( code here )

  • Feb. 11th, 2018
  • xax: purple-orange {11/3 knotwork star, pointed down (Default)
    Tags:
    • code,
    • gamedev challenge,
    • programming
    posted @ 02:50 pm

    hey i have a math problem i was hoping somebody could solve

    okay so as you may already know i wrote some haskell code for enumerating through values, and that was real neat and useful and generally speaking i accomplished a lot with it.

    now i'm trying to expand it. specifically: the code as written lets you enumerate through values, but in constructing the values any sense of relative position is lost. you can do something like V2 <$> from [1..10] <*> from [1..20] and that will let you enumerate through all 200 values, but if you have an index representing, say, V2 3 7 (index 62 btw) you can't say "i want to see the indices of all values that are one unit adjacent". basically there's no way to inspect a deck.

    SO i had the idea to track mergings of decks. change the data type to

    data Enumerable a = Enumerable
      { count :: Integer          -- total number of values in the deck
      , segments :: [Integer]     -- step size of each independent axis
      , selector :: Integer -> a  -- look up the value at an index
      }


    and make the applicative instance

    instance Applicative Enumerable where
      pure v = Enumerable 1 [1] $ const v
      (Enumerable c s f) <*> (Enumerable c' s' g) =
        Enumerable (c * c') (s <> ((c *) <$> s')) $ \i -> let
            j = i `mod` c
            k = i `div` c
          in f j $ g k


    so the V2 <$> from [1..10] <*> from [1..20] value would have 'segments' of [1,10], meaning that there are two axes: one that increases/decreases in steps of 1 and one that increases/decreases in steps of 10.
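    (from and the Functor instance aren't pasted here; presumably they look something like this sketch, with fmap leaving the segments untouched:)

    from :: [a] -> Enumerable a
    from xs = Enumerable (fromIntegral (length xs)) [1] $
      \i -> xs !! fromIntegral i

    instance Functor Enumerable where
      fmap f (Enumerable c s g) = Enumerable c s (f . g)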

    and so then you can decompose indices into movements along those axes, like so:

    -- (mapAccumR comes from Data.List)
    decompose :: Enumerable a -> Integer -> [Integer]
    decompose (Enumerable _ s _) i' = snd $ mapAccumR split i' s
      where
        split :: Integer -> Integer -> (Integer, Integer)
        split i c = (i `mod` c, i `div` c)


    and it will say: okay index 62 there decomposes to [2,6]. and then you can do something like...

    mutate :: Enumerable a -> [Integer] -> [[Integer]]
    mutate e i = tail $ recombine $ zip i mutvals
      where
        mutvals = uncurry spread <$> zip (maxAxes e) i
        spread m c = snd <$> filter fst
          [ (c > 0, c - 1)
          , (c < m, c + 1)
          ]
        recombine [] = [[]]
        recombine ((base,vars):rs) = ((:) <$> pure base <*> recombine rs)
          <> ((:) <$> vars <*> pure (fst <$> rs))
    
    maxAxes :: Enumerable a -> [Integer]
    maxAxes e = decompose e $ count e - 1
    
    mutateI :: Enumerable a -> Integer -> [Integer]
    mutateI e i = construct e <$> mutate e (decompose e i)
    


    to have it spit out all decomposed values that are one unit distant. so then:

    λ> let foo = ((,) <$> from [1..10] <*> from [1..20])
    λ> select foo 62
    Just (3,7)
    λ> mutateI foo 62
    [52,72,61,63]
    λ> select foo <$> mutateI foo 62
    [Just (3,6),Just (3,8),Just (2,7),Just (4,7)]


    and now you can navigate around within an enumerable deck. very neat. i'm currently using this to put together some really basic plant mutation/crossbreeding code (since you can also crossbreed two indices by decomposing them, and then picking values randomly from the two sets) and it's generally working pretty well.
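    (mutateI up there also leans on a construct function that didn't make it into this post; it's just the inverse of decompose, presumably something like:)

    construct :: Enumerable a -> [Integer] -> Integer
    construct (Enumerable _ s _) coords = sum (zipWith (*) s coords)

    (so construct foo [2,6] == 62, matching the decomposition above.)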

    the PROBLEM is:

    the original formulation of Enumerable is also an Alternative and a Monoid:

    instance Alternative Enumerable where
      empty = Enumerable 0 $ const (error "no value")
      (Enumerable c v) <|> (Enumerable c' v') =
        Enumerable (c + c') $ \i -> case i - c of
          x | x >= 0 -> v' x
            | otherwise -> v i
    
    instance Monoid (Enumerable a) where
      mempty = empty
      mappend = (<|>)


    BUT: this new formulation would require some additional segment tracking. Applicative is really easy to store because it just creates n-dimensional prisms with totally regular axes. Alternative is a lot trickier, because it would involve... adding offsets? something? the actual lookup math above is really simple, but if i wanted to store that as a segment i'm pretty sure i would need to stop using just lists of integers.

    i've looked at a lot of math and i've tried to mark out the patterns it would make in indexing, and i have yet to really figure out what i should do with it. i think the GOAL is to have it treat all choices as adjacent? so like pure 1 <|> pure 2 <|> pure 3 <|> pure 4 would be different from from [1,2,3,4], because the alternative would consider 1 adjacent to all of 2, 3, and 4, whereas the basic enumeration would only consider 1 adjacent to 2. (and then from [1,2] <|> from [3,4] <|> from [5,6] <|> from [7,8] would consider 1 and 2 to be adjacent to each other and to all of 3, 5, and 7. ditto 3 & 4 with 1, 5, and 7.)

    it's just uhhhh i am totally failing to see whatever structure i need to see to codify that up, so, if anybody has any thoughts i would like to hear them.

    (spoilers this two-week project is gonna be trying to make something like this.)
