What we have done is invent massive, automatic, no-holds-barred pattern recognition machines. LLMs use detected patterns in text to respond to questions. Image recognition is pattern recognition, with some of those patterns named (like "cat" or "book"). Image generation is a little different, but basically just flips image recognition on its head and edits images to look more like the patterns the model was taught to recognize.
This can all do some cool stuff. There are some very helpful outcomes. It's also (automatically, ruthlessly, and unknowingly) internalizing biases, preferences, attitudes and behaviors from the billion plus humans on the internet, and perpetuating them in all sorts of ways, some of which we don't even know to look for.
This makes its potential applications in medicine rather terrifying. Do thousands of doctors all think women are lying about their symptoms? Well, now your AI does too. Do thousands of doctors suggest more expensive treatments for some groups, and less expensive for others? AI can find that pattern.
This is also true in law (I know there's supposed to be no systemic bias in our court systems, but AI can find those patterns, too), engineering (any guesses how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc, etc.
The thing that makes AI bad for some use cases is that it never knows which patterns it is supposed to find and which ones it isn't. Until we have better tools to tell it not to notice some of these things, and to scrub away a lot of the randomness left behind inside popular models, there are severe constraints on what it should be doing.
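The point about models absorbing whatever patterns exist in their training data can be made concrete with a toy sketch. The dataset, the 40% dismissal rate, and the counting "model" below are all hypothetical, invented for illustration: any sufficiently flexible learner fit to biased historical labels will reproduce the bias, because the bias *is* the pattern.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical synthetic dataset: each record is (group, truly_severe, label).
# We bake in a bias: severe cases in group "b" were historically dismissed
# 40% of the time, while group "a" was always labeled correctly.
def make_record():
    group = random.choice(["a", "b"])
    severe = random.random() < 0.5
    if group == "b" and severe:
        label = 1 if random.random() < 0.6 else 0  # 40% wrongly dismissed
    else:
        label = 1 if severe else 0
    return group, severe, label

data = [make_record() for _ in range(10_000)]

# A "model" that just memorizes P(label=1 | group, severe) -- exactly the
# pattern any flexible learner will find in this data, bias included.
counts = defaultdict(lambda: [0, 0])  # (group, severe) -> [n, n_positive]
for group, severe, label in data:
    counts[(group, severe)][0] += 1
    counts[(group, severe)][1] += label

for key in sorted(counts):
    n, pos = counts[key]
    print(key, round(pos / n, 2))
```

The learned probabilities faithfully mirror the labeling bias: severe cases in group "b" get flagged far less often than identical cases in group "a", and nothing in the training signal tells the model this pattern is one it wasn't supposed to find.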
> engineering (any guesses how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc, etc.
I haven't seen that. I've sent the same pieces of infrastructure equipment everywhere from the poorest third-world cities to super-wealthy ones. Not saying it doesn't exist, but I personally haven't seen it. A lot of these specs are just recycled, to the extent that I've even seen the same page references across countries. I was looking at a spec a few days ago for Massachusetts that was word-for-word identical to one I had seen in a city in British Columbia.
In terms of biases among P.E.s, what I have seen is a preference for not doing engineering and instead billing more hours. Take the same design you inherited in the '80s, make cosmetic changes, generate more paperwork (oh, this one part now can't be TBD, it has to be this specific brand, and I need catalog cuts/datasheets/certs/3 suppliers/batch and lot numbers/serial numbers/a hardcopy of the CAD for it...). So I imagine an LLM trained on these guys (yes, they are always guys) would know how to make project submittals/deliverables longer and more complex while feeling the urge to conduct more stakeholder meetings via RFIs.
Sorry, I don't mean to be bitter. I have one now demanding I replicate an exact copy of a control system from the early '80s with the same parts, and they did not like it when I told them that the parts are only available on eBay.
Please god I hope so. I don't see a path to anything significantly more powerful than current models in this paradigm. ANNs like these have existed forever and have always behaved the way current LLMs do; we just recently became able to run them somewhat more efficiently, with bigger context windows and training sets, which birthed GPT-3, which was then minimally tweaked into 3.5 and 4, among others. This feels a whole lot like a local maximum, where anything better will have to go back down through another several development cycles before it surpasses the current generation.
I think GPT-5 will be eye-opening. If it is another big leap ahead, then we are not at this local maximum; if it is a minor improvement, then we may be.
Likely the focus will then shift to reducing hardware requirements for inference, allowing bigger models to run better on smaller hardware.
Not trying to be a gatekeeper, but is this blog even worth sharing?
My name’s Ed, I’m the CEO of national Media Relations and Public Relations company EZPR, of which I am both the E (Ed) and the Z (Zitron).
There are a few reasons why the AI hype has diminished. One reason is data integrity concerns - many companies prohibit the use of ChatGPT out of fear of OpenAI training their models on confidential data.
To combat this, one option is to provide LLMs that can be run "on premise." Currently those LLMs aren't good enough for most uses. Hopefully we will get there in time, but at this pace it seems to be taking longer than expected.
And here I was, all ready to eke out an existence fixing things inside a giant AI complex for food pellets, like some sort of gut bacterium.
Meh, I guess I will just go back to quietly waiting for global warming, or some easily treatable disease my insurance won't cover, or some religious nutbag to shoot me.
The author doesn't seem to understand that executives everywhere are full of bullshit, and that marketing and journalism everywhere are perversely incentivized to inflate claims.
But that doesn't mean the technology behind all that executive bluster, marketing, and journalism isn't game-changing.
Full disclosure: I'm both well informed and undoubtedly biased as someone in the industry, but I'll share my perspective. Also, I'll use "AI" here the way the author does, to represent the cutting edge of Machine Learning, Generative Self-Reinforcement Learning Algorithms, and Large Language Models. Yes, AI is a marketing catch-all, but most people better understand what "AI" means, so I'll use it.
AI is capable of revolutionizing important niches in nearly every industry. This isn't really in question: there have been dozens of scientific papers and case studies demonstrating it in healthcare, fraud prevention, physics, mathematics, and many, many more.
The problem right now is one of transparency, maturity, and economics.
The biggest companies are either notoriously tight-lipped about anything they think might give them a market advantage, or notoriously slow to adopt new technologies. We know AI has been deeply integrated in the Google Search stack and in other core lines of business, for example. But with pressure to resell this AI investment to their customers via the Gemini offering, we're very unlikely to see them publicly examine ROI anytime soon. The same story is playing out at nearly every company with the technical chops and cash to invest.
As far as maturity goes, AI is growing by astronomical leaps each year, as mathematicians and computer scientists discover better ways to do even the simplest steps in an AI pipeline. Hell, the groundbreaking papers that are literally the cornerstone of every single commercial AI right now are "Attention Is All You Need" (2017) and "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" (2020). Moving from a scientific paper to production generally takes more than a decade in most industries. The fact that we're publishing new techniques today and pushing them to prod a scant few months later should give you an idea of the breakneck speed the industry is moving at right now.
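For readers who haven't met the first of those papers: its core contribution, scaled dot-product attention, is a short formula. Here is a toy, illustrative sketch in plain Python lists; real implementations use batched tensor operations on GPUs, and the example matrices below are invented for demonstration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Q is a list of query vectors; K and V are parallel lists of
    key and value vectors. Each output row is a weighted average
    of the value vectors, weighted by query-key similarity."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Toy example: 2 queries attend over 3 key/value pairs of dimension 2.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(attention(Q, K, V))
```

Each output row is a convex combination of the value rows, so every coordinate stays within the range spanned by the corresponding value column; that averaging-by-similarity step is the whole trick the 2017 paper stacks into transformers.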
And finally, economically, building, training, and running a new AI oriented toward either specific or general tasks is horrendously expensive. One of the biggest breakthroughs we've had with AI was realizing that the accuracy plateau we hit in the early 2000s was largely caused by limits on data scale and quality. Fixing those issues at a scale large enough to make a useful model takes insane amounts of hardware and energy, and if you find a better way to do things next week, you have to start all over. Further, you need specialized programmers, mathematicians, and operations folks to build and run the code.
Long story short, start-ups are struggling to come to market with AI outside of basic applications, and of course cut-throat Silicon Valley does its thing: most of these companies are priced out, acquired, or otherwise forced out of business before bringing something to the general market.
Call the tech industry out for the slime it generally is, but the AI technology itself is extremely promising.