[-] circle@lemmy.world 11 points 1 year ago

Oh yes, and to top it off I have small hands - I can barely reach the opposite edge without using two hands. Sigh.

[-] circle@lemmy.world 1 points 1 year ago

Thanks, I'll check that out.

[-] circle@lemmy.world 1 points 1 year ago

Agreed. YouTube ReVanced works well too. But are there alternatives for iOS?

[-] circle@lemmy.world 6 points 1 year ago

This is such a good idea!

[-] circle@lemmy.world 4 points 1 year ago

Love the clock!


Intuition: two texts are similar if concatenating (cat-ing) one onto the other barely increases the gzip size.

No training, no tuning, no parameters: that's the entire algorithm.

https://aclanthology.org/2023.findings-acl.426/
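For reference, a minimal sketch of that intuition as a normalized compression distance over gzip, roughly the measure the paper plugs into its classifier (function names here are my own; a smaller score means the two texts compress well together):

```python
import gzip

def gzip_len(text: str) -> int:
    # Size in bytes of the gzip-compressed string.
    return len(gzip.compress(text.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    # Normalized compression distance: how much extra compressed space
    # does the concatenation need beyond the smaller of the two inputs?
    ca, cb = gzip_len(a), gzip_len(b)
    cab = gzip_len(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Near-duplicates barely grow the archive, so their distance is small.
print(ncd("the cat sat on the mat", "the cat sat on a mat"))
print(ncd("the cat sat on the mat", "stock prices fell sharply today"))
```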

[-] circle@lemmy.world 2 points 1 year ago

sure, thank you!

[-] circle@lemmy.world 2 points 1 year ago

Thanks. Does this also run compute benchmarks? It looks like it's more focused on model accuracy (if I'm not wrong).


As the title suggests, I have a few LLM models and want to see how they perform on different hardware (CPU-only instances, and GPUs such as the T4, V100, and A100). Ideally it's to get an idea of the performance and overall price (VM hourly rate / efficiency).

Currently I've written a script that measures ms per token, RAM usage (via a memory profiler), and total time taken, roughly along the lines of the sketch below.
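A rough harness in that spirit (a sketch only: `generate_fn` is a hypothetical placeholder for whatever inference call you use, and `tracemalloc` only sees Python-heap allocations, so GPU memory would need framework-specific tools or nvidia-smi):

```python
import time
import tracemalloc

def benchmark(generate_fn, prompt: str, runs: int = 5):
    """Report total time, ms per token, and peak Python RAM per run.

    generate_fn is a placeholder: it should take a prompt and return the
    sequence of generated tokens (swap in your model's inference call).
    """
    results = []
    for _ in range(runs):
        tracemalloc.start()
        start = time.perf_counter()
        tokens = generate_fn(prompt)
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()  # Python heap only, not GPU/native
        tracemalloc.stop()
        results.append({
            "total_s": elapsed,
            "ms_per_token": 1000 * elapsed / max(len(tokens), 1),
            "peak_ram_mb": peak / 1e6,
        })
    return results

if __name__ == "__main__":
    # Dummy "model" so the harness runs as-is; replace with real generation.
    dummy = lambda p: p.split() * 100
    for r in benchmark(dummy, "benchmark this prompt"):
        print(r)
```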

I wanted to check if there are better methods or tools. Thanks!

[-] circle@lemmy.world 3 points 1 year ago

I already miss my muscle-memory operations from Sync :/

[-] circle@lemmy.world 2 points 1 year ago

Can't wait!
