I hear Linux is good.
I’m just glad to hear that they’re working on a way for us to run these models locally rather than forcing a connection to their servers…
Even if I would rather run my own models, at the very least this incentivizes Intel and AMD to start implementing NPUs (or maybe we’ll actually see plans for consumer-grade GPUs with more than 24GB of VRAM?).
Bet you a tenner that within a couple of years they start using these systems as distributed processing for their in-house AI training to subsidize costs.