Reddit cfb was fun, but I'm not going back this season. I'll try to post some here once the season starts. Particularly whenever the gators lose.
Go Vols
I would defend Tennessee at least partially. Tennessee is home to Oak Ridge National Laboratory, which houses the world's second-largest supercomputer (Frontier). Scientists from across the world come to work there every year. These scientists are woefully outnumbered by everyone else, like you said, but some of the brightest minds are in fact in Tennessee.
This is a strong argument. One of my main complaints with modern large companies is the pressure to operate for short-term gains at the cost of long-term losses, so point number 3 sounds amazing to me. Does this mean Intel would no longer be a publicly traded company, but a US Government owned company, something similar to the USPS?
I'm uninformed on this topic; perhaps you or someone else can teach me a bit more. What would the argument be for bailing them out, and what would be the argument for letting them fail? Without any knowledge of the consequences of either, I feel like letting the business fail is what we should do. We let businesses fail all the time, especially small ones. Why should we bail out this business when we let others fail all the time?
It feels like the core concern is that letting that many people lose their jobs at the same time would be a particularly challenging situation for the people affected. But these numbers are far less than the number that have been laid off recently by other companies. The government didn't step in to help those people or the companies performing massive layoffs, so why bail out this company? I don't know, but I would like to hear arguments for both sides.
As a foster parent, we get trained on how kids frequently get trafficked, and the number one place is anywhere parents feel their kids are safe and don't need close supervision. So anything kid-centric like Disney World or family-centric like a church is a prime target for predators. Roblox is a kid-centric place where parents don't closely watch their kids.
Roblox is a big enough company and has been around long enough that they should be doing something, because by this point they definitely know this happens. If you believe everything they claim on their website (https://corp.roblox.com/resource/child-safety) is true, they are doing something. But as far as I can tell, there isn't a report or any way to validate that they are actually doing anything. You just have to trust that a publicly traded company is investing in a department that doesn't directly generate profits for its stockholders, and that it is not giving in to pressure each quarter to increase profits and decrease costs around this function of its business.
I push back on the idea that if something is designed for kids, it doesn't also need to be made safe for kids. Roblox has designed something for kids, so they should do something to make it safe for kids, and parents should still watch their kids.
I don't doubt that it can perform addition in multiple ways. I would go as far as saying it can attempt addition in more ways than the average person, since it has probably been trained on a bunch of math. Can it perform it correctly? Sometimes. That's ok, people make mistakes all the time too. I don't take away from LLMs just because they make mistakes. The ability to do math in multiple ways is not evidence of thinking, though. It's evidence that it's been trained on at least a fair bit of math. If you train it on a lot of math, it will attempt to do a lot of math. That's not thinking, that's just increased weighting on tokens related to math. If you were to train an LLM on nothing but math and texts about math, then ask it an art question, it would respond somewhat nonsensically with math. That's not thinking, that's choosing the statistically most likely next token.
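To make "choosing the statistically most likely next token" concrete, here is a minimal sketch of greedy decoding. The vocabulary and the logits are made up purely for illustration; no real model is this small:

    import math

    # Hypothetical tiny vocabulary and the raw scores (logits) a model
    # might assign to each candidate continuation of "2 + 2 =".
    vocab = ["4", "5", "fish", "math"]
    logits = [6.1, 2.3, -1.0, 0.7]  # made-up numbers for illustration

    # Softmax turns raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]

    # Greedy decoding: always emit the single most probable token.
    best = max(range(len(vocab)), key=lambda i: probs[i])
    print(f"next token: {vocab[best]!r} (p = {probs[best]:.3f})")

Train mostly on math and the math tokens get the big logits, so math is what comes out, regardless of what was asked.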
I had no idea about artificial neurons, TIL. I suppose that makes "neural networks" make more sense. In my readings on ML they always seemed to go straight to the tensor and overlook the neuron. They would go over the functions to help populate the weights but never used that term. Now I know.
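For anyone else who jumped straight to tensors like I did: a single artificial neuron is just a weighted sum of its inputs plus a bias, pushed through an activation function. A toy sketch with arbitrary, untrained weights:

    import math

    def neuron(inputs, weights, bias):
        # One artificial neuron: a weighted sum of its inputs plus a bias,
        # squashed through a sigmoid activation into the range (0, 1).
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-z))

    # Arbitrary example values; training is what would actually set these.
    print(neuron(inputs=[0.5, -1.0, 2.0], weights=[0.8, 0.2, -0.5], bias=0.1))

Stack a layer of these and the weighted sums become one matrix multiply, which is where the tensors come in.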
I think you can make a strong argument that they don't think, rooted in the idea that words should mean something and that statistics and thinking don't mean the same thing. To me, that feels like a fairly valid argument.
Is the argument that LLMs are thinking because they make guesses when they don't know things, combined with the fact that no quantity or quality has been provided to describe thinking?
If so, I would suggest that the word "guessing" is doing a lot of heavy lifting here. The real question would be: is statistics guessing? I would say guessing and statistics are not the same thing, and Oxford would agree. An LLM grabs whatever token its training data says is statistically most likely to come next. I don't think grabbing the most likely next token counts as guessing; that feels very algorithmic and statistical to me. It is also possible I'm missing the argument still.
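To illustrate the distinction with a made-up toy distribution (not output from any real model): a guess ignores what was learned, while the statistical pick follows it deterministically.

    import random

    # A made-up next-word distribution after "the cat sat on the".
    next_word_probs = {"mat": 0.62, "floor": 0.21, "roof": 0.09, "piano": 0.08}

    # Guessing: pick uniformly at random, ignoring everything learned.
    guess = random.choice(list(next_word_probs))

    # Statistics: deterministically take the most probable word every time.
    statistical_pick = max(next_word_probs, key=next_word_probs.get)

    print(f"guess: {guess!r}, statistical pick: {statistical_pick!r}")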
If the LLM could reason, shouldn't it be able to say "my token-based training prevents me from understanding the question as asked. I don't know how many 'r's there are in Strawberry, and I don't have a means of finding that answer"? Or at least something similar, right? If I asked you what some word meant in a language you didn't know, you should be able to say "I don't know that word or language". You may be able to give me all sorts of reasons why you don't know it, and that's all fine. But you would be aware that you don't know and would be able to say "I don't know".
If I understand you correctly, you're saying the LLM gets it wrong because it doesn't know or understand that words are built from letters; all it knows are tokens. I'm saying that's fine, but it should be able to reason that it doesn't know the answer and say so. I assert that it doesn't know that it doesn't know what letters are, because it is incapable of coming to that judgment about its own knowledge and limitations.
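You can see the token/letter mismatch directly with OpenAI's tiktoken library, assuming you have it installed (the exact splits depend on which encoding you load):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    # A model sees integer token IDs, never individual letters.
    token_ids = enc.encode("strawberry")
    pieces = [enc.decode([t]) for t in token_ids]
    print(token_ids)  # a short list of integers
    print(pieces)     # the subword chunks the word was split into

Counting 'r's means looking inside those chunks, which is exactly the information the token stream doesn't hand the model.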
Being able to say what you know and what you don't know is critical to solving logic problems. Knowing which missing information can be derived from known things, and which cannot, is key to problem solving based on reason. I still assert that LLMs cannot reason.
I don't think the current common implementations of AI systems are "thinking", and I'll base my argument on Oxford's definitions of words. Thinking is defined as "the process of using one's mind to consider or reason about something". I'll ignore the word "mind" and focus on the word "reason". I don't think what AIs are doing counts as reasoning as defined by Oxford. Let's go to that definition: "the power of the mind to think, understand, and form judgments by a process of logic". I take issue with the assertion that they form judgments. For completeness, though I don't think its definition is particularly relevant here, a judgment is: "the ability to make considered decisions or come to sensible conclusions".
I think when you ask an LLM how many 'r's there are in Strawberry, and questions along those lines, you can see they can't form judgments. These basic but obscure questions are where you see that the ability to form judgments isn't there. I would also add that if you "form judgments", you probably shouldn't abandon one the moment you're challenged. If I ask an LLM a question and it provides an answer, I can convince it that it was wrong whether or not I'm making junk up. I can tell it it made a mistake and it will blindly change its answer whether it made a mistake or not. That also doesn't feel like the ability to reason or form judgments.
This is where all the hype falls flat for me. Sometimes it looks like a concrete wall, but occasionally that concrete wall turns out to be made of wet paper. You can see how impressive the tool is and how paper-thin it is at the same time. It's cool, it's useful, it's fake, and that's ok. Just be aware of what the tool is.
If you think about it, this was perhaps the most humane way to conduct war. No humans were harmed in this attack, and the ability to harm humans was severely degraded. Drones smashed into unmanned airplanes; nothing but money and hardware was lost. This is the utopian version of war, if such a thing could ever exist: one country removes another country's ability to harm humans, nobody gets hurt, and everyone gets to go home.
Billy Napier wears jorts!