16
submitted 1 month ago by Kintarian@lemmy.world to c/kagi@lemmy.ml

I love summarize page. I can even get a summary from paywalled websites. Sometimes I just need the gist of the article anyway.

[-] Atemu@lemmy.ml 2 points 1 week ago

I'm a bit apprehensive about LLM-generated things in general, but I also use this from time to time when I just want a simple fact or want to get a feel for whether the page is worth visiting at all.

I limit myself to facts that are trivial to verify or should be very hard for an LLM to get wrong.

If I need to look up some function, for example, I'd very quickly find out if it's just a hallucination. I also expect them to return the right thing when the correct result is statistically the most common one.

Though if I'm being honest, I'd probably prefer a function to just get an expanded preview with the entire preview text offered by the website for this use-case (or an equivalent plain text extract from the website).

[-] Kintarian@lemmy.world 1 points 1 week ago

I generally don't like to ask LLMs questions. They seem to get questions wrong pretty often. However, taking a block of text and organizing it in a way that you can read and understand seems to work really well.

[-] Kintarian@lemmy.world 1 points 1 week ago

I'm having a hard time finding information online. Is AI actually good at processing text?

[-] Atemu@lemmy.ml 2 points 6 days ago

Statistically, yes.

(This is a joke.)

In simple terms, Large Language Models predict the continuation of a given text word by word. The algorithm they use to do so draws on a quite gigantic corpus of statistical data, plus a few other minor factors, to predict these words.

The statistical data is quite sophisticated but, in the end, it is merely statistical: a prediction of the most likely next word given a set of words, based on previous data. There is nothing intelligent in "AI" chatbots and the like.

If you ask an LLM chatbot a question, what actually happens is that the LLM predicts the most likely continuation of the question text. In almost all of its training data, what comes after a question is a sentence that answers it, and there are some other tricks in chatbot-type LLMs that make it exceedingly likely for an answer to follow a question.

However, if its data predicts that the most likely words to come after "What should I put on my pizza?" are "Glue can greatly enhance the taste of pizza.", then that's what it'll output. It doesn't reason about anything, nor does it have any sort of store of facts that it systematically combines to give you a sensible answer; it merely predicts what a sensible answer could be, based on what was probable according to the statistical data. It imitates.

If you have some text and want a probable continuation of the kind that often occurred in similar texts, LLMs can be great for that. Though note that if there is no probable continuation, it will often fall back to an improbable one that is merely less improbable than all the others.
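To make the "most likely continuation" idea concrete, here's a deliberately tiny sketch: a bigram model that counts which word most often follows each word in a toy corpus and greedily picks the most common one. Real LLMs use neural networks over vast data, not lookup tables, but the underlying statistical idea (predict the continuation seen most often in training text) is the same. The corpus and function names here are made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for an LLM's training data.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased a mouse ."
).split()

# Count how often each word follows each preceding word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common continuation of `word`."""
    candidates = following.get(word)
    if not candidates:
        return None  # the model has never seen this word, so no prediction
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" most often here
print(predict_next("sat"))  # "on": the only continuation ever observed
```

Nothing in `following` "knows" what a cat is; it only records what tended to come next, which is exactly why a model like this will confidently continue with whatever its data made most probable.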

[-] Kintarian@lemmy.world 2 points 5 days ago

Thank you. I'll double-check its output just to make sure.

this post was submitted on 21 Sep 2024
16 points (100.0% liked)
