Today I tested Phi-3-Mini-4k-Instruct-q4 (GGUF, 3.8B) on my Asus K541UJ (8GB RAM, an i3, and a GeForce 920M with 2GB or 4GB of VRAM, I think) using Jan. Jan is like LM Studio but open source; it's a bit worse since for most models you have to import them yourself, but LM Studio is freeware and proprietary, so I don't trust it.

The model was acceptable: it generated a robotic-sounding story, almost got the first Latin declension right (it failed on the ablative singular), and refused to generate smut or tell me the dick size of early medieval knights. Maybe that's because it was quantised, or maybe I just didn't know what to ask; I might test it again with homework. For what I tested today I give it a 6.5/10, and it's clearly better than TinyLlama (1.1B). I don't think I can run anything much bigger than that, though. Maybe a q3 of Llama 3 8B?
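For the "can I run a q3 of Llama 3 8B?" question, here's a rough back-of-envelope sketch. The bits-per-weight figures (~4.5 for Q4-style quants, ~3.5 for Q3) and the 1.25x overhead factor for KV cache and runtime buffers are my assumptions, not measured numbers:

```python
def approx_ram_gb(params_billions, bits_per_weight, overhead=1.25):
    """Rough RAM estimate for a GGUF-quantized model:
    weights at the quant's bits-per-weight, times a guessed
    overhead factor for KV cache and runtime buffers."""
    weight_gb = params_billions * bits_per_weight / 8  # 1B params at 8 bpw = 1 GB
    return weight_gb * overhead

# Phi-3-mini (3.8B) at ~4.5 bpw (Q4_K-style quant, assumed)
print(round(approx_ram_gb(3.8, 4.5), 1))  # ~2.7 GB

# Llama 3 8B at ~3.5 bpw (Q3-style quant, assumed)
print(round(approx_ram_gb(8.0, 3.5), 1))  # ~4.4 GB
```

So on 8GB of RAM a q3 of Llama 3 8B might just about fit alongside the OS, but it would be tight and probably slow on an i3.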