Using AI as a tool like any other is fine. It’s the blind trust that it can do everything for you that is problematic (not to mention that people hide the fact that something “they created” is actually AI-generated). Just like with any other computer system: garbage in, garbage out.
LLMs are great at language. I often use them to generate syntax for a language I don't know and probably won't use again. While the short snippets may not do exactly what I want, I can edit a snippet fairly easily. Writing one with no knowledge of the language would take me far longer.
Just remember that being good at language is not the same as intelligence. LLMs are good at mimicking thought, but they cannot problem solve or optimise. You still have to do that bit.
I sometimes use LLMs to help me troubleshoot. I usually don't ask for solutions, but rather "what is wrong here?" type stuff.
It has often saved me hours of troubleshooting, but it is occasionally wrong and sees flaws where there are none.
I fully agree that as a tool, LLMs are amazing. Throw in a config file or some code where you know 99% of what it should be but can't find what's wrong... and I'd say there's a good 70% chance it will find the problem, maybe chasing down one or two red herrings before it solves it.
The bad rap of course comes down to two main factors:
- Idiots who use it to do the entire coding job, and thus wind up with something they themselves don't have even a basic understanding of how it fits together, so they can't spot when it does something horrifically wrong.
- The overall reality that, no matter how you slice it, it costs an absurd amount to run these things. So while the AI companies are letting us use them for free or on really cheap plans, it's actually costing real money to process, and realistically there's no sign of it reaching a point where there's actually a fair trade of value...
The right tool for the right job... LLMs can't do a lot, and can make a lot of things worse when misapplied, but that doesn't mean the technology is wholly useless.
AI is best used for prompting and troubleshooting when it comes to code work, imo. It can give ideas, find a small bug, or just help get you out of a corner. I never use the generated code directly but instead type out what I actually want to use, both so I'm sure I know what it's doing, and so my skills don't atrophy.
Sure, there are plenty of cases where an LLM actually helps out. They have a lot of wins; otherwise there wouldn't be any hype around them at all.
The issue is that they have a lot of limitations and caveats, and with the marketing machine behind them, those get pretty much ignored. There is also the issue that they use a lot of energy and clean water, which aren't the most abundant things we have. Then there is the issue of them using stolen data, without any attribution or way of repaying the people who put time and effort into it. And the extra issue of LLM tools being used instead of search engines, which leads to a drop in visitors to the sites that provided the content in the first place. This then leads to those sites shutting down because of higher costs from heavy AI crawler traffic, combined with fewer actual users. And the other side of this coin, where LLM tools are used to create more content, which leads to the dead internet and model collapse as new models are trained on older models' output.
Another issue is not whether they have wins and losses, but whether it's on average worth it. In my experience there are definitely cases where a 4 hour task takes 20 minutes with the help of an LLM, and that feels good. But then there might be many more cases where a 4 hour task takes 8 hours because of fighting with the tools, wrong output needing to be corrected, or things being subtly wrong where it takes a bit of doing to figure out what is off. So depending on what you are trying to do, one might on average be less efficient working with an LLM than without. It might also increase cognitive load, because you aren't just figuring out what the answer should be, you are also trying to understand the result the LLM produced and whether it matches what it should be. So it might be more efficient, yet require more mental energy. The flipside of this is quality going down because people aren't able to handle the extra load and just accept the result they're given, even if that result isn't very good.
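To make that "on average" point concrete, here's a minimal back-of-the-envelope sketch in Python. The probabilities and durations are made-up assumptions for illustration, not measurements; plug in your own numbers and the conclusion can flip either way.

```python
# Back-of-the-envelope: is the LLM worth it on average?
# All numbers below are assumptions for illustration, not measurements.

baseline_hours = 4.0        # the task done entirely by hand

p_win = 0.2                 # assumed chance the LLM nails it quickly
win_hours = 20 / 60         # the "4 hour task takes 20 min" case

p_loss = 1 - p_win          # assumed chance you end up fighting the tool
loss_hours = 8.0            # the "4 hour task takes 8 hours" case

expected_with_llm = p_win * win_hours + p_loss * loss_hours

print(f"without LLM: {baseline_hours:.1f} h")
print(f"with LLM (expected): {expected_with_llm:.1f} h")
# With these made-up odds the tool costs time on average (~6.5 h vs 4 h),
# even though the occasional 20-minute win feels great.
```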
A big issue is that it erodes skills as well, with people becoming more dependent on the tools and losing their own abilities because they aren't practising them regularly like they used to. So once the limitations of the LLM are hit, there is nobody left who can do better. This leads to another issue with jobs: as the LLM tools become more capable of doing the easy stuff, there is suddenly a gap between somebody with no experience and somebody with a lot of experience, with no way of bridging that gap. Usually we let the rookies gain experience on the simpler stuff, to get their feet under them, train their skills and become good at their job. If the rookies can't get a job, because the LLM has that job, then what do we do?
These are the more practical, here-and-now issues, but there are also longer-term meta issues.
Like for example, with the amount of money it takes to develop and run these AI tools, there can only be a few big players in this market. If we push our jobs over to these tools, we are also handing more and more control to those few big players. Without the right legislation and oversight, this is a pretty bad idea. And from the way it's currently looking, these few big players will be US based companies, and the US doesn't have the best track record or outlook on things like regulation and oversight in cases like this. And non-US countries might not want to put US based companies in control like that.
Another more meta issue is the importance of chip design, development and production. With these AI tools this becomes a huge arms race for the entire world. Everyone needs to keep improving on new chips, keep buying the latest and greatest, or risk becoming obsolete. We will pump more and more money into this, until entire countries go bankrupt. It's comparable to the space race in the previous century, where both the US and the Soviet Union were dumping huge amounts of money into the race. But in that case there was a real goal: getting people to the moon and getting them safely back. With AI chips there is no goal, it's a never-ending race, a pit without a bottom for us to pour all our efforts into. We see this today where companies like OpenAI lobby the US government for bailouts when they run out of money, otherwise they will lose ground to China.
Then there's the socioeconomic issue. If so many jobs are pushed to AI tools and get cut, what are those people going to do? In most places we have some social fallback mechanisms in place, but the US for example has very few (and more being cut every day). And those systems can only handle so much: if unemployment hits 10% we're in trouble, but what if it goes up to 30%? Or even 50%? That would not only crash economies, but leave a whole lot of people without access to basics like water, food and shelter. In a utopian world we would use the gains made by automation to set up more social structures, like UBI for example. But I'd argue we need to set up those structures before all those people lose their jobs and crash the economy. A lot of foresight is needed here, which governments usually struggle with. And with the way it's currently going, all the money is flowing into the pockets of Jensen and a lot of other extremely rich people. Exactly the people who never have to worry about not having any money are the people who are getting the money. And I feel that with the people in control right now, there is very little incentive to change this.
There's also the issue where pushing production costs down leads to more consumerism instead of less. Climate change is a huge problem with many complications wrapped within it. But overall I think most people agree that consuming less is one of the few ways we have to combat it. That means using less energy and being more efficient with our energy. But also being less wasteful: don't create things that are made to be thrown away; use tools we can keep for a long time instead of replacing them constantly. Over the past 50 years capitalism has pushed us into the consumer economy, where we need to keep consuming more in order for the economy to work. That means products are made to be replaced, or even when they could last longer, advertising and "fashion" push us to replace them anyway. When production of goods becomes cheaper due to more automation, this might get supercharged, where we consume even more and thus accelerate the destruction of our habitable climate. Now this argument can go both ways, because it can also provide cheaper water, food and shelter, which can lead to greater access to those things. But somehow I feel we will end up with a new phone every month before we actually give people who need it water, food and shelter.
There's also the issue that we might actually hit some form of AGI, or something just smart enough to trigger the singularity. In that case it's an extinction-level event: humanity is just done and over with. Best case scenario, the new AI species thinks we smell bad and just leaves, but there are a lot of worse scenarios than that one. And even if the AI doesn't actually reach AGI levels, the paperclip problem is a real risk. Now I feel like with the current tools we have, the risk of this is very small, but it's worth considering.
I'm sure there are issues I've missed as well, and it's hard to say how many of these are really big issues versus smaller, manageable things. I feel like all of them are show-stoppers, but as the world pushes on I might be in the minority on that. I might be the luddite pushing against new technology as many have done in the past, and who knows, it might all turn out to be a good thing. But my personal view is: we are pretty much fucked.