So that's the thing. People say that they'll never retire and that it sounds boring, but the reality is much different. You just find other things to do. What you'll find is that when you stop working for someone else, you start working for yourself... and if you're a determined individual you'll be busier than you've ever been in your life. Just something to consider.
Somehow I don't think the Quest 3 is going to be a problem. The battery only lasts a couple hours, and you look dumb as hell wearing it in public. Unless the point is to look dumb as hell in public, then mission accomplished.
Regardless of what anyone says, I think this is actually a pretty good use case for the technology. The specific verbiage of a review isn't necessarily important, and ideas can still be communicated clearly if tools are used appropriately.
If you ask a tool like ChatGPT to write "A performance review for a construction worker named Bob who could improve on his finer carpentry work and who is delightful to be around because of his enthusiasm for building. Make it one page," the output can still be meaningful and communicate relevant ideas.
I'm just going to take a page from W. Edwards Deming here, and state that an employee is largely unable to change the system that they work in, and as such individual performance reviews have limited value. Even if an employee could change the system that they work in, this should be interpreted as the organization having a single point of failure.
Depends on what you do. I personally use LLMs to write preliminary code and do cheap world building for D&D. Saves me a ton of time. My brother uses it at a medium-sized business to write performance evaluations... and it's actually funny to see how his queries are set up. It's basically the employee's name, job title, and three descriptors. He can do in 20 minutes what used to take him all day.
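As a rough sketch of what that kind of minimal prompt setup might look like (the function, field names, and wording here are my own guesses for illustration, not his actual queries):

```python
# Hypothetical sketch of a name + title + three-descriptors prompt template.
# Everything here is illustrative, not anyone's real setup.

def build_review_prompt(name, title, descriptors):
    """Assemble a one-page performance review request from three descriptors."""
    traits = ", ".join(descriptors)
    return (
        f"Write a one-page performance review for {name}, a {title}. "
        f"Key observations: {traits}. "
        "Keep the tone professional and constructive."
    )

prompt = build_review_prompt(
    "Bob", "construction worker",
    ["delightful enthusiasm for building",
     "solid general work",
     "could improve on finer carpentry"],
)
print(prompt)
```

The point is that the human still supplies the judgment (the three descriptors); the model just expands it into prose.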
Gonna just buck the trend and say that this AI push has me excited for the future. It's easy to be a nay-sayer, but I genuinely believe the leaps made in AI in just the last year are amazing.
The author clearly doesn't like AI, and completely mischaracterizes Mistral AI based on things its models could say, without considering at all why unaligned models are useful when developing your own.
The author likes to highlight that sometimes an AI will make things up, a phenomenon known as hallucinating. Hallucinations could also be called "creativity" in certain contexts. This isn't always a fault, especially when creativity is the intended purpose.
The author pointed out how it's possible to prompt engineer out sensitive data, and how there's a lack of privacy... which isn't a problem with the tech, but rather tech companies.
The technology used behind the scenes with ChatGPT isn't exclusively for text generation. I'm seeing it appear in speech to text / text to speech applications. It's showing up in image and video editing. It's showing up in ... well ... images/movies of an adult nature.
You're probably already consuming AI generated content without even realizing it.
There's a ton of stuff ChatGPT won't answer, which is supremely annoying.
I've tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.
OpenAI is also a complete prude about nudity, so Eilistraee (Drow goddess that dances with a sword) just isn't an option for their image generation. Text generation will try to avoid nudity, but will also stop short of directly addressing it.
Sarcasm is, for the most part, very difficult to do... If ChatGPT thinks what you're trying to write is mean-spirited, it just won't do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it's fine, and often unintentionally very funny.
There are plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I'm running Wizard 30B uncensored locally, and ChatGPT for everything else. I'd like to think I'm not a weirdo, I just like D&D... a lot, lol... and even with my use case I'm bumping my head on some of the censorship issues with LLMs.
I actually did ask my Doctor about why this happens once. Mainly it's because if a patient before you has something that needs more time it messes up the schedule for every patient after... and this happens every single day. If no one cancels their appointments, then this problem just continually compounds throughout the day. The best bet to being seen on time is to be the first patient of the day.
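That compounding effect is easy to sketch with made-up numbers: if visits only sometimes run over their slot, the overruns still accumulate, so later patients wait longer on average (all numbers here are illustrative, not real clinic data):

```python
import random

# Illustrative simulation: 15-minute scheduled slots, but actual visit
# lengths vary. Any overrun pushes every later patient back, and short
# visits can only partially claw the schedule back (delay can't go negative).
random.seed(0)

SLOT = 15  # scheduled minutes per appointment
delay = 0.0
delays = []
for patient in range(20):  # 20 appointments in a day
    delays.append(delay)                 # how late this patient is seen
    visit = random.uniform(10, 22)       # actual visit length in minutes
    delay = max(0.0, delay + visit - SLOT)

print(f"first patient waits {delays[0]:.0f} min, "
      f"last patient waits {delays[-1]:.0f} min")
```

The first patient always starts on time (delay is zero before anything can run over), which is exactly why the first slot of the day is the safest bet.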
Or just intentionally show up a few minutes late and take the mild scolding from the receptionist. It's not like they're going to turn ya away
Despite being 4 years old, it's still one of the better options, though with caveats. The one thing that it has that nothing else really has is real-time AI upscaling. I've stopped using my Shield and gone back to using Roku boxes and Raspberry Pi 4B's... so it's hard for me to really recommend the Shield.
Nvidia has pretty much abandoned GeForce Experience, so despite this being a selling point for the device, you'd be happier using Moonlight + Sunshine even if you did buy a Shield. The Nvidia Shield also has terrible input lag for Bluetooth controllers. I think this is because of how Android blocks direct access to hardware, which introduces input lag. So if you actually want to use GeForce Experience, it means you'd have to buy an 8BitDo USB adapter, or pay for VirtualHere to fix the controller problem. I personally set up a Raspberry Pi 4B with Moonlight and I'm much happier with that.
For Plex, I'd be hesitant. Over on Reddit I keep reading about how people have attempted using the Shield to run their home media, and it's usually followed with regret. I didn't get into streaming locally until after I stopped using my Shield, though, so I can't personally attest to that. Instead, I'm using a second Pi to run a NAS and Jellyfin... and again, the Shield might be preferable if you want everything in one unit.
So, I can't exactly recommend the Nvidia Shield... but at the same time I don't think most people would have the time to build their own Raspberry Pi based solutions either.
I spent a bunch of time in the meat grinder.
Your bosses, and you'll have many, will be dumb. Your peers will have egos, and when they finally get a promotion (probably one that doesn't include a pay increase), they'll go out of their way to stifle any creative control you might have. By the time you burn out you'll probably find that your 20's are gone, you have no 'management' experience, and companies that are hiring are only looking for 'junior programmers with 10 years experience'. Then there's ChatGPT... which I've literally heard a manager, a guy that couldn't figure out how to open a PDF to save his life, say 'why do we need developers if ChatGPT can write code?' That's a whole new thing that's happening now that I'm not sticking around for.
I personally branched out of programming and got actual experience in Electrical Engineering, and am working on a business degree. Life is better now. I get to touch grass.
I'm telling you this because, after years of working in the industry, I can tell you exactly why programmers get paid six-figure salaries. You have to sit in one spot for 40-60 hours a week thinking about and solving puzzles that other people just don't want to. Few people can do this. I'm not kidding when I say most of my coworkers have some kind of autism or an Adderall addiction. And your bosses won't appreciate what you do, because they simply won't understand it.
In a sincere desire to help you not make the mistakes I made, consider front-loading any additional education you might want. Don't put it off. Push back on working additional unpaid hours. Don't go in on weekends, or work additional hours. A promotion or pay raise only exists if you have it in your hand. The people at work aren't your friends, and you don't owe them anything. You deserve respect.
I don't mean to scare you off from the field. It has been highly rewarding, and I still love working with computers, but burnout is a very real thing.
Anyone living or dead? Definitely dead. I think I could reliably win a fight against a dead guy.
Certainly can be. Falls that lead to permanent disability or loss of function are "sentinel events." Hip fractures specifically are a really bad sign. Like, so bad that you've gotta square away wills and end-of-life care.
Lol... I just read the paper, and Dr Zhao actually just wrote a research paper on why it's actually legally OK to use images to train AI. Hear me out...
He changes the 'style' of input images to corrupt the ability of image generators to mimic them, and even shows that a supermajority of artists can't tell when this happens with his program, Glaze... Style is explicitly not copyrightable under US case law, so he just provided evidence that the data OpenAI and others use to generate images is transformative, which would legally mean that it falls under fair use.
No idea if this would actually get argued in court, but it certainly doesn't support the idea that these image generators are stealing actual artwork.