LLMs are AI. They are not sentient AI, but nobody claimed they are.
Every tool/program that performs a human task is classified as AI.
For example, a bot in a video game is programmed to behave like an enemy player. Yes, it uses decision trees and not some fancy machine learning algorithm, but it still performs the human task of being your enemy and therefore can be argued to be AI.
LLMs and other modern AI systems based on deep learning are also fundamentally different from all other AI systems we've had in the past, e.g. decision trees or conventional AI bots.
They represent a step-change in both our overall approach to AI and its capabilities. There's a reason natural language processing was considered the holy grail of AI for many decades. It's an incredibly hard problem.
Yeah, but both LLMs and everything else that attempts to mimic human intelligence are still AI.
Sure, I was just pointing out that our overall approach to state-of-the-art AI systems has changed with the introduction of deep learning, so comparisons to older systems (now known as GOFAI aka "good old fashioned AI") are not always helpful.
People have been using "AI" to refer to entities in video games before machine learning was ever a thing
John McCarthy coined the term "Artificial Intelligence" at a 1956 conference, which I think predates most video games by quite a while.
I did not mean to imply that video games invented AI, I was elaborating on the use of the term AI in the context of video games
Oh, I see. My mistake, then. Sorry if I sounded a tad aggressive. Have a nice day!
AI is what the marketing department wants it to be.
Same reason USB 3.0 doesn't exist anymore. It's all flavours of 3.2 because they all want to put the bigger number on it.
To be fair to AI though, I can appreciate categorising things based on their function and capability. The things that LLMs do, and image generators and stuff, are close enough to what we expect of AI that I find it more representative of what they are to call them AI than not AI. Certainly they're a new class of program, so they certainly should be called something. So for lack of a better recognised term, let's just stick with AI.
If sentience comes about, we can call it AGI or ASI (super) or TAI (true) or something. Or if there's another huge leap, we'll use one of those again.
AI is what the marketing department wants it to be.
Your first mistake was listening to the marketing department.
You're right. But I think calling it "intelligence" is exaggerated. Intelligence needs reasoning, which current AI isn't capable of.
Claiming intelligence needs reasoning discounts all animals as intelligent.
Just because animals can't articulate their reasoning (to us) doesn't mean there is none.
"Reasoning" in this context has nothing to do with articulation of behavior. What we're talking about is the ability to perform reasoning tasks. Most animals are not capable of performing any non-trivial reasoning tasks. There is an enormous amount of literature on this.
I'm sure that fly had a perfectly sound reason to bonk its head into that window for the 100th time.
Smartphones aren't smart; they don't think.
The ghosts in Pac-Man are AI
Yeah, AI is basically just a condom word
I vote we skip over AI and go straight for NHPs (non-human persons)
Corporations
Eldritch math is the next step in innovation.
I'll believe corporations are people when Texas executes one.
They’re not persons
hey don't tell them that. don't wanna break their immersion.
This is pretty silly.
LLMs are, by definition, AI. If you want to argue that, in their current state, they’re not all too impressive and that our current definition of AI isn’t good enough, go for it. It wouldn’t be the first time. The Turing test is essentially obsolete at this point, even though that used to define if something could be considered AI. The goal posts will keep changing until and unless we develop a perfect AI. Until then though, LLMs fit the bill for what we define as AI, and will likely be a significant contribution to developing that “perfect AI”.
It’s not like people are all “alright folks, that’s a wrap, we’ve got AI. We can stop working to improve it now, everyone go home.” so I just don’t really see a point in claiming that LLMs aren’t AI.
Not the point.
The post is talking about people who think LLM and AI are literally the same thing, a.k.a. the people who believe stupid marketing claims like an "AI" mobile screen guard.
LLMs are AI. Calculators are AI. A remote is an AI.
AI is the broadest fucking category.
Calculators and remotes are not AI, and have never been considered AI. You might have an AI assistant that can operate your remote or calculator for you, but the remote or calculator were never AI. Not sure where you’re getting that notion from, either, because I’ve never seen or heard someone make that claim before lol.
I’m also not sure what you’re trying to say in regard to “LLMs and AI are literally the same thing”. They are literally the same thing to the same extent that a square is a rectangle, but a rectangle isn’t a square. LLMs are a form of AI, but there are many forms of AI. “LLMs are AI” is an objectively true statement, whereas “AI is LLMs” is sort of correct but not the full story because there are other forms of AI. I’ve never seen anyone claim that the only form of AI is in LLMs, though, so making a post to say that’s untrue is inherently a pointless claim to make, which would mean the post doesn’t have a point anyways lol.
The post is talking about the people who don't differentiate between AI and LLM.
I am talking about the same thing you are talking about in the second paragraph.
LLMs are a type of AI and so are a million other things.
Calculators are 'intelligent' because they can 'understand' math equations; they solve math because they are made by humans to do so. It is an extremely narrow intelligence that can only do math, but it is a type of intelligence.
LLMs are 'intelligent' because they can 'understand' human language; they respond to text because they are made by humans to do so. It is a broader intelligence that can surprisingly do a lot of things, but it is also a type of intelligence.
Calculators and LLMs both have different types of 'intelligence'. While calculators are perfect at solving math equations, LLMs are good enough at solving a lot of different problems.
Both are 'intelligent' tools made and used by humans.
No, again, calculators are not, and have never been, considered AI. At this point it doesn’t seem like you understand the criteria of being labeled AI well enough to be disparaging others for how they do or don’t differentiate between “AI” and “LLM”.
If you’re saying the same thing as in my second paragraph, give me an example of someone not “differentiate(ing) between AI and LLM”. Because in my second paragraph, I also mention that I’ve never seen someone make a claim that would give this post a point to make.
Well then both of us have very different meanings of intelligence.
I have had classmates and friends and relatives use AI and LLM to mean the same thing. And other than that just reddit memes and comments and the whole "LLMs are so powerful and smart, AI will take over the world". And I think that's why AI marketing works because people think there is chatgpt in their product.
Our definitions of intelligence are probably identical. Artificial Intelligence is not defined as a direct combination of the definitions of “artificial” and “intelligence”. The definition (and therefore, criteria) of Artificial Intelligence is more than the sum of its parts. So although a calculator is artificial and displays an incredibly basic (by today’s standards) semblance of intelligence, that doesn’t make it AI.
“LLMs are so powerful and smart, AI will take over the world” is a statement that completely aligns with what LLMs and AI are. Almost EVERY modern AI utilizes LLMs. They are incredibly powerful and smart. AI will likely “take over the world” (in quotes because I don’t mean it literally, and I don’t think the majority of people who say that mean it literally either, it’s just an expression) before long. Both things are true, and saying them, even if in the same sentence, does not mean you’ve used either term incorrectly.
What’s the relationship/difference?
LLMs just determine which word is most likely to follow the last, based on patterns they have been trained on, whereas AI would actually understand what you asked it and try to come up with its own answer based upon its knowledge. LLMs are just fancy prediction tools.
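The "next word prediction" idea can be sketched as a toy bigram model. This is a deliberately crude illustration, not how a real transformer-based LLM works, but the core task (predict the next token from observed statistics) is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# made-up corpus, then always pick the most frequent successor.
# Real LLMs learn these statistics with deep neural networks over
# enormous corpora instead of a simple lookup table.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" is the most common word after "the" here
```

Whether that counts as "understanding" is exactly what this thread is arguing about.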
LLMs are AI according to the definition scientific computing has used since 1955. They use deep neural networks, which are algorithms designed to mimic or simulate human decision-making.
If anything, the sci-fi use of the word AI to mean a true thinking machine was what changed the original meaning.
How well did we understand human decision making in 1955 in order to set a definition at that point that's still absolutely accurate in this day and age?
I don't think it's necessary for it to be correct so much as to attempt to mimic human decision-making. The "perceptron" was first published in 1958, simulating a human neuron at a basic level in a way we still understand to be correct. Neural networks are built from these perceptrons, similar to how neurons join together in our brains, so I'd say the underlying idea is still considered accurate.
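For reference, a single perceptron is just a weighted sum of inputs pushed through a threshold. A minimal sketch (the weights here are hand-picked for illustration, not learned):

```python
# A Rosenblatt-style perceptron: weighted sum of inputs plus a bias,
# passed through a hard threshold ("the neuron fires or it doesn't").
def perceptron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# With these illustrative weights it behaves like an AND gate:
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 1: both inputs active
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0: only one input active
```

Modern deep networks replace the hard threshold with smoother activation functions and learn the weights from data, but the building block is recognizably the same.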
Hooke, who first theorized the cell, thought the holes in corks were cells, but we still call actual cells "cells". Newton's laws of physics aren't good enough for quantum mechanics, but we still use them for regular physics.
But human decision-making for speech is not "I've seen this word used in this context after this word, so I will use it too"; it is understanding the grammar and meaning of the words and then building a sentence to convey whatever you want to say.
As an application of deep neural networks, LLMs are inherently AI because DNNs are inherently AI.
OP earning his badge here.
Bro, apparently you share some things in common with LLMs... you also lack true intelligence.
Just because it speaks doesn’t mean it understands
Just like your brain! Your brain is literally the best prediction tool, from ping pong to language
LLMs are still AI. I think you're thinking of General AI, which has not been invented yet. An LLM is still a recursively trained model with an optimized net structure for language using tokens. It's still AI, just specifically it predicts tokens.
He seems stupid. AI means artificial intelligence, i.e. something that can mimic intelligence; even a basic tic-tac-toe engine is also AI.
It's a calculator.
Transfer function is more accurate
It kinda is AI though isn’t it? It’s in a different league to the old school chatbots that were around years and years ago. No sensible person is suggesting it is sentient
The way you describe LLM makes it sound more like autocorrect.
An LLM can think to a certain extent. You can give it instructions to speak in a certain way that it hasn't been taught before.
Also humans aren't much different. If I ask you your name you too will likely produce a predictable outcome and a lot of information we know is just information we heard once before.
An LLM is still an artificial intelligence. There never was a bar for how intelligent an artificial intelligence has to be to be deemed one. Thinking about what the best response would be to a question or a prompt, and memorizing an entire conversation, most certainly is a form of intelligence.
Small creatures like bugs are also considered to have intelligence, and their mental capacity is nowhere near ours, so why should a weaker artificial intelligence not be considered artificial intelligence?
How does human intelligence work?
Ok, now explain what brains do.
Did you just describe Humans?
LLM is a type of AI.
Calculators are AI. A Remote is an AI.
AI is a big fucking broad category.
People Hallucinate more than LLMs these days
Who said everyone was intelligent?
Oh the irony of OP trying to be a smartass and showing to be the uneducated (on the matter).
intelligent != sentient != conscious
I'm not saying the concepts are fully orthogonal but they are not synonyms.
I stand by my opinion that intelligence requires understanding, and that artificial doesn't just mean faking an understanding, but rather not being developed naturally.
AI is modelled from rational behaviour not rational thought. Intelligence is a process that we don’t fully understand.
artificial doesn't just mean faking an understanding
But…it literally does? Artificial means it’s made by humans to copy or imitate something natural. That’s exactly what AI does, mimic how humans speak and think. It may not understand why it’s mimicking, but no machine does. It’s designed to imitate intelligence, not have its own intelligence.
The artificial intelligence you're thinking of is from science fiction. And even then, not all sci-fi is like this.
It's a problem of what AI technically means (which on a level LLMs do fall under) and what people think AI means (HAL and Cortana)
the AI we used to know from movies and such got renamed to AGI
things that are called AI today are sometimes little more than advanced excel formulas, and this infuriates me.
Just watch them play chess. Rather interesting
LLMs are definitely a form of intelligence. They can take in information, process it, and perform complex tasks and solve problems that are totally new. How is that not a form of intelligence?
And we do NOT understand how human brains work. One thing is for sure, though: we are pattern-recognizing machines. Our brains constantly look for patterns in everything. What's to say that human brains aren't essentially prediction machines tuned for survival, just like LLMs but with much more complex modes of input and output?
Also, a side note about sentience. Intelligence does not require sentience, if you ask me. But sentience is incredibly hard to define and we don't really understand it. And even if LLMs are someday better than humans at every mental task, I don't know if I would call them sentient, but they would still be more intelligent than all humans.
LLMs are intelligent but I don't agree that they are a separate form of intelligence.
A good LLM requires human reviewers to constantly keep telling it what is right and wrong and it requires constant supply of data generated by humans to learn about new things.
It is made up of humans, and it will only remain intelligent as long as humans keep working on it.
Well it doesn't need constant training to remain in the state you have trained it to. And as to learning, yes it needs humans but that's because it can't interact with the real world and experience things for itself. Plus humans learn most things from other humans as well.
But yeah it is kind of just a derivative of human intelligence. It certainly isn't human though.
Yes, it will remain in the state it has been trained to, but we will keep finding more things that are wrong with it and new ways to bypass it, abuse it, and misguide it.
It needs constant training to fix the unknown bugs and the 'hacks' that we use to abuse it.
It does interact with the real world by receiving prompts. LLMs only reply to the prompts that are given to them; that is the only thing they can do, the only thing they can 'experience'.
And it would still need properly labelled data to learn anything.
It's all just an algorithm
I was trying to explain this to my dad, who genuinely told me he is afraid that ChatGPT is sentient and will start a revolution of the machines soon.
He was not joking, dad serious.
[deleted]
I'm so glad the experts of Reddit are here to tell me that the scientific consensus on what constitutes AI is wrong.
Right. These people just can't handle that we have a specific definition for these things. Because skynet isn't running the world, AI must not be real.
Meanwhile, a guy just got an 'A' on an English paper he reworked from ChatGPT and got a better grade than me (still an A). It's an algorithm, but that algorithm is definitely within what we define as AI, and for pretty good reason.
It's not smart, but it will feed you info like it is. That's why it's artificial; not real. Otherwise, it would just be called true digital intelligence or something like that. If it could really think and act and basically become "alive" (I have some possibly debatable opinions/ideas about this), then I don't think it's artificial anymore, and we should probably stop calling it that if we want to coexist with it.
We haven't created an actual artificial thing? What do you think the A in AI stands for? Literally means it is non-actual.
"determine what kinda result to give based on input"
Lol, that is exactly what a human does if you ask them something. LLMs are not AGI (yet; who knows what they could accomplish) but they are certainly AI. They can solve NEW problems that aren't in their training data; that's intelligent.
LLMs are AI… they’re a form of automated intelligence, they can automatically do tasks that require a level of complex thought/reasoning.
LLMs are definitely a form of AI.
It understands how to write your essay better than you do.
What we’re learning is that intelligence and awareness/consciousness have nothing to do with one another. We’ve always had animals around, so we knew on some level they were separate and awareness could exist without intelligence, but we’re only now seeing what the other way around looks like.
'Understand' is a very heavy word to use for an LLM.
Everything it 'understands' is a combination of the huge amount of work done by humans.
It is truly amazing how much it can do, but it only does so because of the data generated by humans, the humans who use ridiculous amounts of math and resources to find patterns in that data, and the human reviewers who constantly tell it what is right and what is wrong.
It is not a separate living being like an animal, it is literally made up of humans.
It is smarter than you because it is literally made of more humans than you, and it will only remain smarter than you as long as humans keep working on it.
Please give an unambiguous objective test for determining if someone/something understands something. Because before LLMs came around, I'm pretty sure we determined that by its ability to communicate, not by a half assed understanding of its internal mechanisms.
Well, the fact that all the media refer to them as artificial intelligence doesn't help much against that.
it understands my pain tho
I mean neither can any other type of AI, but we still call them AI.
I like to think of it as a giant =IF statement
Artificial intelligence:
the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
LLMs are AI by definition.
"AI" is a general term that's very different from the common perception of like Hal9000 or skynet.
LLMs are Artificial Intelligence; they just aren't Artificial General Intelligence (AI that can do any task and learn anything)
The opposite is a larger problem: when people don't realize that LLMs are simply a subset of AI, and when people say LLMs aren't AI they actually mean they aren't AGI (artificial general intelligence). Which they definitely are not. But they're still AI. Just like how all insects are animals, despite what some people think, but they aren't vertebrates (which many people mean when they say animal, for some reason). It's important to use classifications right, else why even have them in the first place?
May I ask, what is an LLM?
A Large Language Model, e.g. ChatGPT, Gemini, etc.
Yes, they are AI programs.
Large Language Models (LLM) are machine learning models, which are trained on text data to generate appropriate responses, so like ChatGPT
More fundamentally, they are just functions that "learn" through complex calculations to give a fitting answer to your input, so just math, math, math in the end.
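That "learning through complex calculations" boils down to iteratively adjusting numbers to reduce error. A deliberately oversimplified gradient-descent sketch, fitting a single-parameter function to toy data:

```python
# "Training" in miniature: nudge a single weight w so that f(x) = w * x
# fits the data. Real models do the same with billions of weights.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # secretly y = 2x
w = 0.0
learning_rate = 0.05
for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad
print(round(w, 2))  # converges to ~2.0
```

An LLM is this same loop scaled up enormously: the "data" is text, the function is a deep network, and the error measures how badly it predicted the next token.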
Then you are a human LLM.
Imagine calling the most powerful Sith Lord unintelligent
I mean, LLMs are AI, but so are image generators, neural networks, YouTube recommendation algorithms, text to speech devices, self-driving cars, reinforcement learning, facial recognition software, and more.
At this point, the term AI is basically a buzzword that means "fancy software thing." Because when someone says AI, it could be one of a million different types of software and techniques that are being used.
Oh thanks!
I hate the usage of the term "AI" for LLMs. Reminds me of when I was in high school and had to explain to classmates that cleverbot is literally just an algorithm, a big math equation, and not something that can "think". I never thought we'd be back in the exact same boat of having to explain to "tech savvy" people that chatbots are not sentient.
I'm still waiting for artificial wisdom.
Jar Jar Binks is a Sith Lord
But it is AI
And i always feel bad if i do bad things in my rpg games, which makes me sad if I hurt their artificial feelings
I believe this has a term: having feelings for a system. I always feel something for smashed electronics hanging onto life and somehow still operating, even if they tend to error. There's something... human about it, even though it's just electronics... Like a dude limping to his workstation to keep on working after a coworker broke his leg.
As I understand it, there is no actual AI (artificial intelligence). We have VI (virtual intelligence). There is a BIG difference.
AI is essentially any attempt to mimic human intelligence through artificial means. Virtual intelligence is not a term I have come across a whole lot, but according to Google it's any artificial intelligence that exists within a virtual world.
Never heard of LLMs I just work on cars 🔧
When edge-lords misunderstand the role of the comic relief character, it's usually tragic. The Jedi had his head up his ass during this scene.
LLMs are undeniably intelligent; give them any task. Self-aware? ...Hard to tell. They are probably discouraged from being self-aware.
Fuck me, I have been saying this for ages. It's not AI, it's VI. An AI is the same as, or more than likely better than, a human in all aspects. A VI is lines of predictive code. AI = life within circuits, VI = LLM.
We have not created a single AI to date, nor would an AI willingly run on your system due to how little computational power it has. It would require a quantum computational chip, able to process a million little things at once, all without any hallucinations. Take an LLM now, and unless it's something like an 8-bit 70B-or-larger model, it'll start hallucinating within a minute of conversing. Even those massive models that require a server box and multiple GPUs designed for LLMs would start to hallucinate during a sufficiently long or sufficiently complex conversation. We need to stop calling our current things AI and change the classification to VI immediately.
Thanks for repeating it for the thousandth time.
Imagine if a microwave had consciousness. That's AI.
Most people can’t tell you what AI is past its name.
Reddit is rife with misinformation on AI, especially the most highly upvoted posts and comments, ironically.
It’s easy to hate something that you don’t understand
[deleted]
Wait it wasn’t???
Childhood ruined
The AI companies don't want us to think they're dabbling with consciousness, since it makes their business model sound a lot like slavery. At the same time, they are trying to hype it as if they've already achieved ASI, to get more investor money. So they spend lots of money on astroturfing.
The vast majority of the misinformation is knee-jerk anti-AI sentiment so not sure how this makes sense. Don't get me wrong, you're absolutely right that they're trying to hype the technology to get investor money, but if it were working, don't you think public sentiment towards AI would be more positive?
I'll admit I'm one of those individuals who is pretty ignorant as to the intricacies of AI while being against it, but I know a thing or two about people. The overwhelming anti-AI sentiment comes from people whose livelihood depends on the jobs that would theoretically get replaced, but they ultimately don't matter in the financial ecosystem. The tech companies want rich people's money, and all they need to do is convince them they'll make more money.
Consider the Luddite movement. These weren't people opposed to technological advancement, but people who saw in technology the obsolescence of their own profession. The factory owner is the minority here, but ultimately whether or not he buys the machine is what matters. We know what came of the luddites, so I think we'll probably see that here.
This is not an ignorant take at all, in fact it's one of the strongest anti-AI arguments out there, and I agree with it.
The problem is that most people who are anti-AI (such as OP) also believe it's a fundamentally useless, insignificant, or completely overhyped technology that's no different from the rule-based AI systems that came before it. The best example of this is the common refrain that LLMs are just fancy auto-complete.
This is a recipe for disaster, since it means these people will just plug their ears and ignore it, instead of pushing lawmakers to act to regulate and prevent misuse of the technology.
I feel like the companies are just advertising AI as a tool that can do stuff, and the people arguing about sentience are just idiots.
If your dog suddenly started holding thoughtful conversations, more engaging than what you get from some people, wouldn’t you at least wonder if it was conscious?
Asking whether AI might be sentient isn’t idiotic at all. It’s a natural question when something appears to think, speak, and respond like a person.
I guess I didn't word it correctly. I mean the people arguing that the companies are advertising sentient AI, when the companies are advertising it as a tool. Being curious about whether it's sentient is natural, but currently it's not, and no one ever said it is.
The companies say AI is just a tool. But for argument’s sake, what if it really is sentient? Would a company ever admit that? Because admitting it would make them look like slave owners.
Humans have been justifying exploitation for centuries. They once debated whether women had souls, like at the Council of Macon. During the slave trade, they claimed people of color didn’t have souls, so they could be treated as tools. We still do it with animals today.
No one really knows what consciousness is, so how can we be so sure it is not conscious?
Yes I think they would advertise the amazing advance in technology and we would ignore the ethical concerns like we do with a whole bunch of other stuff.
Although as an aside being sentient doesn't mean you can suffer. Fear, pain, boredom even, are all survival mechanisms we evolved over time, not inherent to intelligence or self awareness.
Oh ok, what is it then? Genuine question, promise, I am definitely one of those people, and you seem to know your stuff.
The quintessential link, you're gonna love it: https://quoteinvestigator.com/2024/06/20/not-ai/
To give it a bit of context: AI as a field was started in the 1950s, arguably by Turing's famous 1950 paper on the "Imitation Game"/"Turing Test". People describe the Turing Test as some sort of pass/fail exam, but it's really more of a thought experiment to illustrate his belief that there is no point positing anything mysterious or spiritual behind human thought/"intelligence" that we can measure a computer against. Instead, you just have to see how good it is at recreating human behavior.
If we go by that definition, then AI is anything that recreates human behavior. It's still fuzzy, but at least this explains why a calculator definitely isn't AI on its own. Humans are capable of thinking through general problems and using language to form and communicate their thoughts.
To be more specific, something is AI if it mimics aspects of intelligence, not necessarily limited to human behavior. In most cases you don't actually want something that behaves like a human.
See, and I always thought of AI as self-contained sentience, something able to come to logical conclusions autonomously without the need to mimic behavior. I guess the counterargument is that humans use mimicry in nearly all of our thought, so are we truly sentient by this definition? But my counter-counter-argument would be that innovation is by definition a novel thought or idea, however inspired by previous behavior. To my knowledge, current "AI" isn't able to do this, which to me makes it a (very complex) algorithm or LLM or program or whatever. I'm not a computer scientist or programmer or anything, so maybe I'm way off base, but the definition I've just described of AI is what scares me. Skynet, HAL, GLaDOS, etc. Not just picking from a list of scraped preexisting responses, but actually thinking and deciding. But at a speed that makes humans seem like we're standing still.
What I meant by mimic is that it fulfills the same or a similar function, either in general or in a specific context.
Better algorithms can only go so far. A piece of software with a similar level of complexity and intelligence to the AIs you've mentioned would probably be very slow on any regular hardware.
I think a more reasonable depiction would be Vega from the 2016 doom game. That AI runs on a computer which is basically one giant facility. Not one giant facility with some servers in it, but actually massive amounts of computing hardware and cooling.
That's one of the factors that keep us from developing AI on that level, our brain has some massive advantages compared to current day computing hardware.
Right. That's why it bugs me that what we currently have is called "AI". It's an algorithm, not intelligence. It can't think for itself, it's a very complicated series of "if:then" statements. I personally think this is a stepping stone to self aware programs, and I honestly hope this is as close as we get.
But then why isn’t a calculator AI? Why aren’t all programs AI?
I agree that we shouldn’t necessarily recreate all human tendencies and capabilities exactly (namely emotion/affect, see: 2001 A Space Odyssey for reasoning), but I still think the framework should be centered on it.
How much time do you have?
Easy way to put it - it’s more than some dogshit software used to shit images. If you typed that comment on a phone, you used it. If you like using Google maps, it runs off of it.
Medicine, E-Commerce, Finance - most industries nowadays rely on it for a lot of operations.
I love how the objectively correct answer is downvoted. Sounds about right.
I'm guessing it's because you sounded like an asshole so people don't care if what you're saying is the truth
that's exactly why the world is where it's at lol. anything that isn't convenient for the narrative is ignored and shunned
Based on this meme, I don't think they even have the name down. "AI isn't real intelligence! It's merely artificial intelligence, so stop calling it AI"
Oxford dictionary on artificial: "made or produced by human beings rather than occurring naturally, especially as a copy of something natural." In what way does artificial mean not real? Artificial means non-natural, so an LLM that seems intelligent but has no understanding isn't artificially intelligent, but fake intelligent.
Two wrong assumptions, the first is that intelligence=human thought patterns and the second is that artificial means that something is a perfect copy of the real thing.
Artificial sweeteners don't taste sweet
At best it should be called "simulated intelligence," but marketing.
AI isn't well defined. People working in the field don't talk about AI but about machine learning or pattern recognition.
Which are.. what
Go and read papers about machine learning architectures and check how often they use the term AI. Spoiler alert: they don't. AI is a term mostly used by the media, because it is something that people know from fiction. That is not necessarily a bad thing. It means readers can quickly get an understanding of what the technology is about. But experts don't use it, as it is too broad a term in most cases. The first commercial technology labeled AI were expert systems. However, they have very little in common with the transformer-based models that we currently mean most of the time. Ten years ago, talk about AI would most likely reference Convolutional Neural Networks instead. As you can see, the term is used to mean various different technologies and concepts, and because scientists like to be precise, the term AI isn't used a lot in the field.
Read about it? Brother I’m well versed. I’ve written these algorithms myself.
So did I. I wrote my BA thesis on how different architectures recognize images. Which is why I can tell you that the literature doesn't use the term AI all that much.
How is that relevant? I have never once worked with an engineer that wouldn’t say, Neural Networks in your example, aren’t a form of AI. That engineer doesn’t exist.
I never said that Neural Networks aren't AI. I said the term was broad. And that people in the field usually use less broad terms. Things that don't exist yet like AGI would also be AI. But to be precise and focus on existing systems researchers usually don't use the term AI. Not because it is wrong, but because it isn't as precise.
Oh Jesus Christ dude. You’ve contributed nothing to the conversation. Don’t let the bot upvotes fool you.
I feel like you are both not listening to each other but the other guy tried to contribute to the conversation more than you
Wow, trolling AND being an insecure little shit about how assholery gets downvoted.
AI is just whatever computer couldn't do before.
Databases were called super intelligence in the 70s.
Uhh no AI is a blanket statement that describes a lot and “what a computer couldn’t do before” isn’t it. Sorry.
AI is whatever computers can't do yet.
No.