[-] nulldev@lemmy.vepta.org 6 points 1 year ago

Get any equalizer app (e.g. Poweramp Equalizer).

[-] nulldev@lemmy.vepta.org 7 points 1 year ago

BTW that still uses Google's proprietary gesture typing library internally: https://github.com/wordmage/openboard/commit/46fdf2b550035ca69299ce312fa158e7ade36967

There's still no good FOSS alternative to Google's library though, so it is what it is.

[-] nulldev@lemmy.vepta.org 15 points 1 year ago

JPEG XL came after WebP. It's more of a successor and less of a competitor.

That said, in the world of standards, a successor is still a competitor.

[-] nulldev@lemmy.vepta.org 13 points 1 year ago

No, you want it for scrolling. Scrolling feels much more responsive at 120Hz. It does drain the battery more, but not by enough to be a deal breaker for most people.

It's useless for videos, though, as most video content is 60 fps or lower anyway.

[-] nulldev@lemmy.vepta.org 6 points 1 year ago

> it just predicts the next word out of likely candidates based on the previous words

An entity that could consistently predict the next word of any conversation, book, or news article with extremely high accuracy would quite literally be a god, because it could effectively predict the future. So it is not surprising to me that GPT's performance is not consistent.

> It won't even know it's written itself into a corner

In many cases it does. For example, if GPT gives you a wrong answer, you can often just send an empty message (a single space) and GPT will say something like: "Looks like my previous answer was incorrect, let me try again: blah blah blah".
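
For anyone curious what that trick looks like outside the ChatGPT UI, here is a minimal sketch of the same "nudge" through an API. This is just an illustration assuming the OpenAI Python SDK; the model name and the arithmetic prompt are made-up examples, not anything specific to this conversation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [{"role": "user", "content": "What is 17 * 24?"}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = first.choices[0].message.content
messages.append({"role": "assistant", "content": answer})

# Send a near-empty follow-up. With its own answer back in the context,
# the model will often spot a mistake and correct itself unprompted.
messages.append({"role": "user", "content": " "})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```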

> And until we get a new approach to LLM's, we can only improve it by adding more training data and more layers allowing it to pick out more subtle patterns in larger amounts of data.

This says nothing. You are effectively saying, "Until we find a new approach, we can only expand on the existing approach," which is obvious.

But new approaches come out all the time! There are constant advances in tokenization, and every week there is a new paper with a new model architecture. We are not stuck in some sort of hole.

[-] nulldev@lemmy.vepta.org 8 points 1 year ago

> LLMs can't critique their own work

In many cases they can. This is commonly used to improve their performance: https://arxiv.org/abs/2303.11366
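
The basic loop from that paper is simple enough to sketch. The outline below is hypothetical (not the authors' code): `ask` stands in for whatever chat-completion call you use, and the prompt wording is a placeholder.

```python
def ask(messages: list[dict]) -> str:
    """Placeholder for your LLM client call (e.g. a chat completion)."""
    raise NotImplementedError

def solve_with_reflection(task: str, max_rounds: int = 3) -> str:
    # First attempt at the task.
    attempt = ask([{"role": "user", "content": task}])
    for _ in range(max_rounds):
        # Ask the model to critique its own answer.
        critique = ask([
            {"role": "user", "content": task},
            {"role": "assistant", "content": attempt},
            {"role": "user", "content": "Critique the answer above. Reply only 'OK' if it is correct."},
        ])
        if critique.strip().upper().startswith("OK"):
            break
        # Try again with the critique in context.
        attempt = ask([
            {"role": "user", "content": task},
            {"role": "assistant", "content": attempt},
            {"role": "user", "content": f"Critique: {critique}\nGive a corrected answer."},
        ])
    return attempt
```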

[-] nulldev@lemmy.vepta.org 9 points 1 year ago* (last edited 1 year ago)

What are you talking about? The issue to bring back captchas was only opened 4 days ago!

Captchas were only removed 2 weeks ago, and no one spoke up then: https://github.com/LemmyNet/lemmy/issues/2922

The developers have nothing against captchas. They were the ones who originally built and added the feature: https://github.com/LemmyNet/lemmy/pull/1027

[-] nulldev@lemmy.vepta.org 7 points 1 year ago

I think there have been some API changes, so you need both the new backend and the new frontend.

[-] nulldev@lemmy.vepta.org 16 points 1 year ago

My bad, I have Bypass Paywalls Clean so I didn't even notice the paywall!

[-] nulldev@lemmy.vepta.org 75 points 1 year ago* (last edited 1 year ago)

How in the world does setting a bunch of subs to private crash the website?

[-] nulldev@lemmy.vepta.org 8 points 1 year ago* (last edited 1 year ago)

Doesn't Reddit have multireddits? Lemmy can implement the same feature.

OP mentioned this.

[-] nulldev@lemmy.vepta.org 7 points 1 year ago

I don't think this is a problem. It's the same on Reddit, where you can have multiple gaming subreddits or multiple news subreddits. Eventually the communities will consolidate.

