Bellingcat
Is this a real thing? I don’t believe I’ve ever encountered this. I suspect they’re actually being demeaning to men in general, or men who don’t fit their idea of masculinity. I’ve encountered people like that. Though the opposite is more common (men, and women, demeaning women who don’t fit their idea of what a woman should be like, or just demeaning women in general).
Larger models are more efficient to train — they reach a given loss with less total compute — for reasons not fully understood. These large models can then be used as teachers to train smaller models more efficiently. I’ve used Qwen 14B (14 billion parameters, quantized to 6 bits per weight), and it’s not too much worse than these very large models.
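Roughly, the teacher-student setup looks like this (a minimal PyTorch sketch, assuming Hugging Face-style models with a `.logits` output; the model objects, temperature, and mixing weight are placeholders, not how Qwen or Llama were actually trained):

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, T=2.0, alpha=0.5):
    """One training step: match the teacher's soft token distribution
    plus the usual cross-entropy on the real next tokens."""
    with torch.no_grad():
        teacher_logits = teacher(batch["input_ids"]).logits
    student_logits = student(batch["input_ids"]).logits

    # Soft targets: KL divergence between temperature-scaled distributions
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Hard targets: standard next-token cross-entropy against the data
    ce_loss = F.cross_entropy(
        student_logits[:, :-1].reshape(-1, student_logits.size(-1)),
        batch["input_ids"][:, 1:].reshape(-1),
    )

    loss = alpha * kd_loss + (1 - alpha) * ce_loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The soft targets are what make the student learn faster than it would from the raw data alone: the teacher’s full distribution over next tokens carries more signal per example than a single correct token.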
Lately, I’ve been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the “knowledge” they seem to retain.
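The 10.5GB figure follows straight from the parameter count and bit width (back-of-envelope only; real quantized files like GGUF add embedding tables and block metadata on top):

```python
params = 14e9          # 14 billion parameters
bits_per_weight = 6    # Q6-style quantization
size_gb = params * bits_per_weight / 8 / 1e9
print(f"{size_gb:.1f} GB")  # -> 10.5 GB
```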
I’ve seen this term on Mastodon. I’m actually a bit confused by it, since I’ve always thought replies are to be expected on the Internet.
I think women have a problem with men following them and replying in an overly familiar manner, or mansplaining, or something like that. I’m old, used to forums, and never used Twitter, so I may be missing some sort of etiquette that developed there. I generally don’t reply at all on Mastodon because of this, and really, I’m not sure what Mastodon or microblogging is for. It seems to be for developing personal brands and for content creators to let their followers know what they’ve made; it doesn’t seem to be for discussion. I.e., more like RSS than Reddit (that’s my understanding, at least).
Hmm. I just assumed 14B was distilled from 72B, because that’s what I thought Llama was doing, and that would just make sense. On further research it’s not clear whether Llama used the traditional teacher method or just trained the smaller models on synthetic data generated by a larger model. I suppose training smaller models on a larger amount of data generated by larger models is similar, though. It does seem like Qwen was also trained on synthetic data, because it sometimes thinks it’s Claude, lol.
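The synthetic-data route is simpler than logit-level distillation: the big model just writes training text, and the small model is fine-tuned on it with ordinary next-token cross-entropy. A hypothetical sketch using transformers-style APIs (the model id, prompts, and sampling settings are made up for illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "large-teacher-model"   # placeholder model id
tok = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name)

prompts = ["Explain how a hash table works.", "Summarize the causes of WWI."]

synthetic_texts = []
for p in prompts:
    inputs = tok(p, return_tensors="pt")
    out = teacher.generate(**inputs, max_new_tokens=512,
                           do_sample=True, temperature=0.8)
    synthetic_texts.append(tok.decode(out[0], skip_special_tokens=True))

# The student is then fine-tuned on synthetic_texts as ordinary training
# data -- no access to the teacher's logits is needed, which is why it's
# hard to tell from the outside which method a lab actually used.
```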
Thanks for the tip on Medius. Just tried it out, and it does seem better than Qwen 14B.