Not even -- it's a simplified Civilization clone for mobile. (It actually sounds like a pretty neat little game, but, uh, chess it is not!)
heck yeah I love ~~Physics Jenny Nicholson~~ Angela Collier
Thank god I can have a button on my mouse to open ChatGPT in Windows. It was so hard to open it with only the button in the taskbar, the start menu entry, the toolbar button in every piece of Microsoft software, the auto-completion in browser text fields, the website, the mobile app, the chatbot in Microsoft's search engine, the chatbot in Microsoft's chat software, and the button on the keyboard.
I mean, they do throw up a lot of legal garbage at you when you set stuff up; I'm pretty sure you technically do have to agree to a bunch of EULAs before you can use your phone.
I have to wonder though if the fact Google is generating this text themselves rather than just showing text from other sources means they might actually have to face some consequences in cases where the information they provide ends up hurting people. Like, does Section 230 protect websites from the consequences of just outright lying to their users? And if so, um... why does it do that?
Even if a computer generated the text, I feel like there ought to be some recourse there, because the alternative seems bad. I don't actually know anything about the law, though.
yeah, I definitely think machine learning has obvious use cases to benefit the common good (youtube auto captions being Actually Pretty Decent Now is one that comes to mind easily) but I'm much less certain about most of the stuff being presently marketed as "AI"
The problem is just transparency, you see -- if they could just show people the math that led them to determining that this would save X million more lives, then everyone would realize that it was actually a very good and sensible decision!
yeah, my first thought was, what if you want to comment out code in this future? does that just not work anymore? lol
I definitely think the youths are stressed because of 'environmental pollution,' but not in the way this commenter means...
This is good! Though he neglects to mention the group of people (including myself) who have yet to be sold on AI's usefulness at all (all critics of practical AI harms are lumped under 'reformers', implying they still see it as valuable, just currently misguided).
Like, ok, so what if China develops it first? Now they can... generate more convincing spam, write software slightly faster with more bugs, and starve all their artists to death? ... Oh no, we'd better hurry up and compete with that!
I had the same thought as Emily Bender's first one there, lol. The map is interesting to me, but mostly as a demonstration of how anglosphere-centric these models are!
"The industry is still learning how to even use the tech."
Just like blockchain, right? That killer app's coming any day now!
This is kind of the central issue to me honestly. I'm not a lawyer, just a (non-professional) artist, but it seems to me like "using artistic works without permission of the original creators in order to create commercial content that directly competes with and destroys the market for the original work" is extremely not fair use. In fact it's kind of a prototypically unfair use.
Meanwhile Midjourney and OpenAI are over here like "uhh, no copyright infringement intended!!!" as though "fair use" is a magic word you say that makes the thing you're doing suddenly okay. They don't seem to have very solid arguments justifying them other than "AI learns like a person!" (false) and "well google books did something that's not really the same at all that one time".
I dunno, I know that legally we don't know which way this is going to go, because the ai people presumably have very good lawyers, but something about the way everyone seems to frame this as "oh, both sides have good points! who will turn out to be right in the end!" really bugs me for some reason. Like, it seems to me that there's a notable asymmetry here!