so it's ok to say "SSD read/write speed", but now that we have something closer to the original meaning of the word, someone always has to point out that "LLMs don't have a soul" (or whatever you think is required for it to count as akchyually reading)
If I can just stand up for the nitpicker - arguably in the uncanny valley it's more natural to point out that it's not reading (by their definition) than outside it (SSDs).
makes sense in a philosophical debate or when you're talking to your confused grandparents, but does anyone on hn not know how LLMs work, at least on the level of "tokens, matrices, data, sgd"?
otherwise, that reminder must imply that people do know how it works, and yet they still ascribe to these models some property like qualia, i.e. something other than "being able to turn english into code and compute into shareholder value";
but then if you disagree, why even mention it in the first place? do atheists randomly proclaim "btw god isn't real!" in unrelated conversations with strangers of unknown religious beliefs?
Twenty or so years ago, Levis had a program called Personal Pair (they partnered with a firm/product called Intellifit) where you would step into a millimeter-wave 3D scanner and get your precise measurements.
https://www.youtube.com/watch?v=G1IceHjADbQ
Which you could use to get a custom pair of jeans made just for you. This failed partly because the target market turned out to be middle-aged adults (who buy fewer pairs), not the 18-25-year-olds they were hoping for. Partly over privacy concerns. Partly because of the factory's ability to make jeans to such a tight tolerance. And partly because it was promoted like it was a novelty ("Consumers love it!"). Mentioned here previously:
https://news.ycombinator.com/item?id=2444319
But with this software - the tolerances are looser, so the clothing becomes more manufacturable. And the measurements can be anonymous - you don't feel like you're stepping into a TSA scanner for everyone to see.
I hope they are able to build relationships with multiple clothing brands so shopping from home becomes less hit-or-miss. The benefit to the brands would be fewer returns for size issues.
There are still privacy concerns, though, especially with their demo, which sends a POST on "Generate". The author says the model is 85 kB of weights, which could run perfectly well in the browser.
> But with this software - the tolerances are looser, so the clothing becomes more manufacturable.
Does it? How do looser measurements help? I assume the manufacturer would always take the upper bound of the dimensions. Suppose the model also predicts your dimensions are larger than they really are; those two in combination give you an oversized piece of clothing.
Not just oversized - undersized also happens. Most cloth is still cut by hand using large electric saws, and it's just not that accurate (caution: loud music):
https://www.youtube.com/shorts/jvQHvz4GlPQ
Notice that the panels are marked out with chalk, and if the operator doesn't stay square to the table or isn't diligent in marking up the panels, they won't be consistent with the brand's standard sizing.
I mean - ideally a set of panels of a piece of clothing would be cut by computerized laser so it's accurate to what the buyer needs. But that costs too much and takes too long.
I don't understand why the height and weight errors aren't 0 when they are known inputs. If I say how tall I am, why is the model estimating something else?
That's a common phenomenon in model fitting, depending on the type of model. In both old-school regression and neural networks, the fitted model does not distinguish between specific training examples and other inputs, so specific input-output pairs from the training data don't get special privilege. In fact, it's often a good thing that models don't just memorize input-output pairs from training, because that allows them to smooth over uncaptured sources of variation, such as people all being slightly different, as well as measurement error.
In this case they had to customize the model fitting to try to get the error closer to zero specifically on those attributes.
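A toy sketch of one such mechanism (all numbers here are invented, not from the article): if the ground-truth label is a tape-measured height with a little noise, a smooth fitted model approximates the conditional mean and can't drive its error against those labels to zero, even when the stated height is an input.

```python
import numpy as np

rng = np.random.default_rng(0)

# The stated height is an input, but the label is a tape-measured height
# with ~0.5 cm of measurement noise. A fitted model approximates the
# conditional mean, so its error against the noisy labels stays around
# the mean absolute noise, not zero.
n = 1000
stated = rng.normal(175.0, 7.0, n)             # questionnaire input, cm
measured = stated + rng.normal(0.0, 0.5, n)    # ground-truth label, cm

X = np.column_stack([stated, np.ones(n)])      # linear model with intercept
w, *_ = np.linalg.lstsq(X, measured, rcond=None)
pred = X @ w

err = np.abs(pred - measured).mean()
print(f"mean abs height error: {err:.2f} cm")  # roughly 0.4 cm, not 0
```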
Yes, but why are they estimating the features when they are already available? They can estimate the other measurements from height etc, and just use the known inputs as is. I don't get the point of passing them through a model at all.
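A minimal sketch of that pass-through idea (the function name and dict keys are mine, not the article's API): keep the learned model for the unknown measurements, but return the known inputs verbatim.

```python
# Return the known inputs as-is instead of the model's re-estimate of them.
def predict_measurements(model, height_cm, weight_kg, answers):
    preds = model(height_cm, weight_kg, answers)  # dict of all measurements
    preds["height"] = height_cm                   # trust the user's input
    preds["weight"] = weight_kg
    return preds

# Dummy stand-in model that mis-estimates the known inputs slightly.
dummy = lambda h, w, a: {"height": h + 0.4, "weight": w - 0.3, "bust": 92.0}
out = predict_measurements(dummy, 175.0, 70.0, {})
print(out["height"], out["weight"])  # exactly the stated 175.0 and 70.0
```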
It takes more like 10 seconds. For a large range of height and weight inputs crossed with all option combinations, you could precompute ~10M measurements and return results basically instantly.
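That precompute could look like this (grid resolution and question count are assumptions for illustration, and the model is a stand-in): quantize the two numeric inputs, enumerate the categorical answers, and serve every query from a lookup table.

```python
import itertools

# Quantize height/weight to a 1 cm / 1 kg grid and enumerate the answers.
HEIGHTS = range(140, 211)   # cm
WEIGHTS = range(40, 161)    # kg
OPTIONS = list(itertools.product([0, 1], repeat=6))  # six binary questions (assumed)

def fake_model(h, w, opts):
    # stand-in for the real MLP
    return {"bust": 0.5 * h + 0.2 * w + sum(opts)}

table = {(h, w, o): fake_model(h, w, o)
         for h in HEIGHTS for w in WEIGHTS for o in OPTIONS}
print(len(table))  # 71 * 121 * 64 = 549,824 precomputed entries
```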
Interesting idea. Using a questionnaire as input for an MLP makes sense but the real challenge is designing questions that capture useful signal instead of noise. If that part is done well, the approach has a lot of potential.
My guess: the article itself is clearly AI-authored, and there are a fair number of us who don't particularly like the writing style. Further, it implies something about the original human's own valuation of this work - if they decided to let the machine handle it, why should I spend my own time reading what they didn't bother to write?
I'm guessing the writing is AI-assisted (there's no fluidity and there are some weirdly placed phrases), but I see they're in Poland and likely not a native English speaker?
The ancestry finding is the most honest part of this post: training on a uniform blendshape mix but inferring with the same fixed mix was essentially a 3 kg noise floor they built themselves. Elegant fix: just add ancestry to the questionnaire so the train and inference distributions match.
The physics-aware loss is interesting too. Including the Anny forward pass so mass gradients flow back through all volume-related params together, rather than solving each of the 58 outputs independently like Ridge, is exactly the right call. Ridge can't couple params; the MLP hidden layers can.
I've been thinking about a similar problem from the opposite direction: instead of reconstructing bodies, I'm working on running small models on very constrained hardware (NanoMind, 2GB RAM Android phones). The "boring model, interesting data pipeline" lesson resonates strongly. Upstream data quality always matters more than architectural complexity.
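The coupling idea above can be sketched with a toy mass model (the names and formulas here are stand-ins, not the article's actual Anny pipeline): because the predicted mass depends on all shape parameters at once, its error term ties them together in one scalar loss, which per-output Ridge regression cannot do.

```python
import numpy as np

def toy_mass(params, density=985.0):
    # stand-in volume model: every shape parameter contributes to volume
    volume_m3 = 0.001 * np.abs(params).sum()
    return density * volume_m3

def physics_aware_loss(pred, target, true_mass, lam=0.01):
    mse = np.mean((pred - target) ** 2)           # per-measurement fit
    mass_err = (toy_mass(pred) - true_mass) ** 2  # couples all params
    return mse + lam * mass_err

target = np.array([20.0, 15.0, 35.0])
true_mass = toy_mass(target)
perfect = physics_aware_loss(target, target, true_mass)
off = physics_aware_loss(target + 1.0, target, true_mass)
print(perfect, off)  # 0.0 for a perfect prediction, larger otherwise
```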
MLP trained on 8 questions achieves ~0.3 cm height error, ~0.3 kg weight error, and ~3-4 cm for bust/waist/hips measurements.
https://www.mdpi.com/1424-8220/22/5/1885 + some hacking => "we want to productize this"
Haven't seen that one yet. I like it.
That's not how it reads because there is a semicolon in there. It means "This is AI, so I didn't read it".
Also, I'm getting nitpicky here, but LLMs don't "read".
> so it's ok to say "SSD read/write speed", but now that we have something closer to the original meaning of the word, someone always has to point out that "LLMs don't have a soul" (or whatever you think is required for it to count as akchyually reading)
do storage devices have souls?
> Averages lie about the tails, and a person who gets a 15 cm bust error doesn’t care that the mean is 4 cm.
A variation of that sentence should be mandatory in every scientific paper.
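A quick numeric illustration of that point (numbers invented): a model that is fine for 98% of people and badly wrong for the remaining 2% still reports a comfortable-looking mean.

```python
import numpy as np

rng = np.random.default_rng(0)
# 980 decent predictions (~3 cm error) plus 20 large misses (15 cm).
errors_cm = np.concatenate([rng.normal(3.0, 1.0, 980), np.full(20, 15.0)])
print(round(float(errors_cm.mean()), 1))    # mean looks fine, ~3 cm
print(float(np.percentile(errors_cm, 99)))  # the tail tells the real story: 15.0
```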
This is definitely manipulated.