This was originally posted here a decade ago. I’m happy to see it’s still alive.
I’ve been using some generated assets for a game with voxelized art. I intend to take a deeper look at this and see if it can simplify parts of my workflow.
This is fascinating. I see it's powered by weights and probabilities. Would this be a very simple ancestor of things like Stable Diffusion that we have now, or is it on a completely different branch (a different approach)?
It’s procedural generation but that’s pretty much where the similarities end. People today might use a big generative NN model to do this, using maybe a thousand times as much energy to get essentially the same result. Gen AI is definitely a big step forward in our relentless drive to make software more inefficient in order to compensate for any efficiency gains that the hardware guys come up with.
I always wondered how this compares to the 1999 algorithm Texture Synthesis by Non-parametric Sampling [1]. The results look very similar to my eyes. Implementation here [2] — has anyone tried both?
https://news.ycombinator.com/item?id=12612246
[1] https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/p...
[2] https://github.com/goldbema/TextureSynthesis
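For anyone curious what the 1999 algorithm actually does: it grows the output one pixel at a time, compares each unfilled pixel's known neighborhood against every same-sized window in the sample, and randomly picks among the closest matches. Below is a minimal grayscale sketch of that idea, with simplifications (raster fill order instead of the paper's boundary-first order, a fixed square window); the function and parameter names are my own, not from the linked repo.

```python
import numpy as np

def synthesize(sample, out_size, win=5, eps=0.1, seed=0):
    """Grow an out_size x out_size texture from a 2D grayscale sample
    by non-parametric neighborhood sampling (Efros-Leung style sketch)."""
    rng = np.random.default_rng(seed)
    h = win // 2
    out = np.full((out_size, out_size), np.nan)

    # Seed the output with a random win x win patch from the sample.
    sy = rng.integers(0, sample.shape[0] - win)
    sx = rng.integers(0, sample.shape[1] - win)
    out[:win, :win] = sample[sy:sy + win, sx:sx + win]

    # Every full win x win window in the sample, flattened to (N, win, win).
    patches = np.lib.stride_tricks.sliding_window_view(sample, (win, win))
    patches = patches.reshape(-1, win, win)

    for y in range(out_size):
        for x in range(out_size):
            if not np.isnan(out[y, x]):
                continue  # already filled (seed region)
            # Neighborhood around (y, x); NaN marks unknown pixels.
            pad = np.pad(out, h, constant_values=np.nan)
            nb = pad[y:y + win, x:x + win]
            mask = ~np.isnan(nb)
            # Sum-of-squared-differences over the known pixels only.
            diff = (patches - np.nan_to_num(nb)) ** 2
            d = (diff * mask).sum(axis=(1, 2))
            # Pick uniformly among patches within eps of the best match.
            best = d.min()
            cand = np.flatnonzero(d <= best * (1 + eps) + 1e-9)
            pick = patches[rng.choice(cand)]
            out[y, x] = pick[h, h]  # copy the matched patch's center pixel
    return out
```

It is O(output pixels x sample windows), which is exactly why it is slow on large images; most of the later work (including tree-structured and patch-based variants) is about speeding up or batching that inner search. This is just a sketch to make the comparison concrete, not a faithful reimplementation of either project.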