There really is no one as Irish as Barack O’Bama
Reminds me of the line in Shawshank Redemption by Morgan Freeman's character: "maybe it's because I'm Irish"
IIRC he's actually Irish in the book and they kept the line because they thought it was funny
Oh yeah that's definitely it, but the context makes it hilarious
Holy shit... "Big Red" makes so much more sense now!
It's just Red, not Big. But yes, the character in Stephen King's book is a red-headed Irish man. Morgan Freeman just nailed the audition despite that.
I mean if you don't switch everything around for Morgan Freeman, who would they do it for? Maybe Tom Hanks or Daniel Day Lewis but that's it.
Morgan Freeman’s character was white in the book? Damn, imagine if that movie came out in 2014, it would be one of the most despised movies online, instead of the most beloved. “You made it political” 😡
O'Leary, O'Riley, O'Hare and O'Hara
Actually his full name is Will Scarlet O'Hara (he's from Georgia).
That place is such an oddity. It manages to kinda have the same liminal vibes as an abandoned strip mall even when it's busy.
Good Subway clone there; got a photo with the Obamas and the Tayto anthropomorphic potato-chip mascot
He actually is pretty Irish lol. And his VP is maybe the most Irish president ever
I had an Irish friend who always said Obama looks like her father with a different skin colour. She wasn't wrong, it was pretty funny
Daniel Obama
edit: Daniel O’bama
O’Bama
There's no one as Irish as Barack O'Bama
Bronco Bama!
I visited Ireland last month and it was hilarious how many Irish people told me about O'Bama and showed me this song when I told them I'm American
👌
Barack Smith
Barry O'neil
Brian O'Conner?
Brock O'bama.
For years I nicknamed him "Bronco Bobby" because of a flawed YouTube auto-caption.
No it’s the governor of Alabama
Al Bama
Al A’Bama
Banana
O'Bamaugh*
Wasn’t he Brock O’Bama’s brother?
No one’s as Irish as Barack O’Bama
Barack Travolta
Lol really does look like Daniel Craig doesn't it?
Jesus Christ. That's Jason Bourne.
I thought it was John Cena. Then I remembered...
Never forget.
I see Christian Bale
When we train AI on content created by humans, we impart all of our biases. Then, when innocent people ask AI questions, they think they're asking an impartial robot, but they're actually asking something with the combined biases of every person who contributed to the training data.
Even with the news stories that "AI tried to survive by backing itself up," or something like that.
Well, you fed it hundreds of years of stories where the protagonist refused to "die when asked"; what do you think it learned from that?
Those stories are obviously fake, and being peddled by the tech bros who want to sell these systems to us
Not quite. It's mostly just short sentences from the safety reports in model cards. Researchers test a variety of things, and sometimes models do quirky things like "it tried to run a simulated cmd and clone itself," which isn't a very serious attempt at doing that, but it did it and they noted it. Then a journalist sees it and writes a very clickbaity article about it.
An LLM would have absolutely no means of copying itself unless it was specifically trained to do so, given tools to do so, and placed in an environment where it had access to those tools. If somebody writes a "copies random files around my computer" program, they can't sensationalize it when that program does exactly that.
100%, that's what I meant by it not being a serious attempt. LLM inference would have no way of accessing its weights without explicitly being given access to them; researchers just note that the model has that incentive in it (probably due to stories about AI replicating itself, so it's more in a roleplay mode than sentient or anything like that).
Given that the models still get released, they rightly don't think it's a big deal at current capability levels. But it does still get sensationalized in articles, mostly because it sounds impressive to people who aren't involved.
That seems counter-productive.
If you're trying to sell an AI model to someone, why would you make up stories about it having problematic responses to things that could pose a hazard if its capabilities are expanded?
Like as cool as SkyNet might sound in an abstract sense, if you go to the DoD and say "hey we've got this fully automated AI system that can control the nuclear arsenal, but just so you know, it 100% thinks that it is SkyNet from The Terminator franchise", I'm betting they'll pass.
They want people to believe they have created artificial intelligence, when what they have is a pattern recognition algorithm
Simply this. It's like why the OpenAI CEO treats his product as if he's afraid of it turning into Skynet. Shareholders are stupid and only see $$$$
We have more than enough stories about AI killing everybody; hopefully it never gets to know "I Have No Mouth, and I Must Scream"
It probably does, and that's no big deal. All that's gonna do is teach it to say scary things to us. All it can do is generate text, it's not actually gonna take over our society, or even understand the implications of that. LLMs can't even conceptualize the concepts that they refer to, they can only recycle human words back to us, that's why they hallucinate so often.
The worst it can do is instruct a human to do something stupid, which is a real concern. But more in a "blind leading the blind" sort of way than real scheming.
It's better to think of it like an average response instead of a biased one. It's never gone outside to know the sky is blue...it's only parsed everyone saying the sky is blue so it goes with that.
I suspect modern models that do better research with an internet connection would work backwards from similar portraits, blurring them until they found a 99% match.
"instead of a biased one"
"Bias" here does not necessarily mean "human prejudice".
If I train my AI facial recognition software in America, it's going to be biased to be much better (or only any good) at recognizing the faces of white people, because the population I'm training it with is overwhelmingly white.
When seatbelts were killing and injuring women (and shorter men) at way higher rates back in the day, it took engineers and scientists a while to figure out why. It wasn't just the result of all of those guys probably having some views about women that today we would recognize as negative and distasteful; their departments were so overwhelmingly male, and they were so stuck in assuming a "male default," that it legitimately did not occur to them that seat belts also needed to be engineered with smaller drivers in mind. That's also bias, separate from how any of those guys thought about women or short men.
Everything we as humans know is biased in some way because we don't have the full gamut of experience and knowledge, and that goes double for what our already-biased asses feed into this robot. People have a hard time separating that bias from the concept of prejudice, and from the idea that a "robot" could suffer from that kind of "human failing."
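To make the point concrete, here's a minimal sketch (the numbers and "groups" are entirely made up for illustration; this is not a real face model): fit a single Gaussian to a 95/5 mix of two populations and check how well the fitted model covers each one.

```python
# Toy demo of "bias without prejudice": a model fit to a skewed
# population simply represents the underrepresented group worse.
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, size=(950, 8))   # overrepresented group
group_b = rng.normal(3.0, 1.0, size=(50, 8))    # underrepresented group

train = np.vstack([group_a, group_b])
mu, var = train.mean(axis=0), train.var(axis=0)  # the whole "model"

def avg_log_likelihood(x):
    # Average diagonal-Gaussian log-density under the fitted model.
    return (-0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)).sum(axis=1).mean()

print("group A held out:", avg_log_likelihood(rng.normal(0.0, 1.0, (1000, 8))))
print("group B held out:", avg_log_likelihood(rng.normal(3.0, 1.0, (1000, 8))))
# Group B scores far lower: the model "knows" group A better, purely
# because of who was in the training data.
```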
Exactly, as Heraclitus and Thucydides noted in various forms, perspective can make a huge impact and everyone's affected by that. We just gotta be aware of it
As a scientist, thank you for the great summary. These hidden biases are everywhere, and a shockingly huge part of data science is rooting out the really subtle ones.
Also, these models aren’t entirely unsupervised learning by crowdsourcing. A lot of the knowledge behind them is sourced from encyclopedias and scholarly articles and such. If you’re going to question that too, then all I have to say is: yeah, human knowledge isn’t unbiased, nor is it entirely true. We just trust some sources more than others, and we have a peer review system to try to maintain integrity.
But the funniest part about being trained to treat scholarly sources as authoritative is that most modern models think they can just make up new sources, synthesized from other sources, and that these will be equally authoritative. I got into an argument with one a few weeks back that kept trying to cite fake sources it was inventing as "research" to support its point. It produced full journal citations with real authors and completely made-up article names, with links to real journal websites that, when clicked, just returned an error from the journal site because the articles never existed.
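The practical fix is to stop arguing and check the citations directly. A minimal sketch (the DOI below is a made-up placeholder) that just tries to resolve each cited link:

```python
# Hallucinated citations usually 404 or fail to resolve; real ones
# redirect to an article landing page.
import requests

citations = [
    "https://doi.org/10.0000/fake-article-id",  # hypothetical placeholder
]

for url in citations:
    try:
        r = requests.get(url, timeout=10, allow_redirects=True)
        status = r.status_code
    except requests.RequestException as e:
        status = f"error: {e}"
    print(url, "->", status)
```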
They don't think anything, it's generating text that it thinks will satisfy the prompt.
They've got no context outside of the text so they will generate words from the part of the network that the prompt activates. That means "text looks like this" - it'll likely copy something if it has it, otherwise it will make shit up, and it doesn't know the difference.
It can't apologise if it gets it wrong, it can generate text that looks like an apology.
Talking to these things as though they are people is how we got into this mess. I believe the interface is intentional, because it makes people think they're much more sophisticated than they are; we are easily fooled by conversational language because we only experience that from humans, so we infer it must be similar to a human.
There's a funny dichotomy here, too. In principle, finding the most commonly cited articles is right in the wheelhouse of LLMs. But in order to do that, it needs to see the citation as a whole, not cut up in parts. This is why it will return fake citations: because it constructs them from multiple tokens.
But the problem is that LLMs want shorter token lengths in other contexts. The most well-known example afaik is "how many r's are in strawberry?".
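You can see the tokenization problem directly with OpenAI's tiktoken library (assuming it's installed; the exact split depends on the encoding):

```python
# The model never sees letters, only sub-word tokens, so "count the
# r's" is inference, not lookup.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print([enc.decode([t]) for t in tokens])
# The word arrives as a couple of opaque chunks, not a letter sequence.
```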
"They don't think anything, it's generating text that it thinks will satisfy the prompt."
You were so quick to be pedantic that you didn't give a fuck about looking stupid.
You are so deeply misunderstanding what is going on inside of these models. They legitimately are thinking. It's not copying anything. They aren't memorization machines. At inference time, when you're using it, there's nothing to satisfy. There is no test being taken, there is no prediction being made. It's just generating the next token.
I mean, your first sentence is itself a perfect contradiction.
"They don't think anything, it's generating text that it thinks will satisfy the prompt."
These models have actually developed deep syntactic, semantic, and even abstract, conceptual relationships between words and ideas. You're saying it's just "matching text that looks like this" but that is very clearly not what is happening. They can perform deeply complex reasoning tasks. They can follow multiple threads in a conversation, they can understand nested conversations. It goes so far beyond even this, they have developed their own internal algorithms. I don't think saying that alone even does this justice, you need to understand what that means. It means they need to be able to develop the algorithms, determine when they're useful, and actually use them.
This is not magic. It is still just a next-token prediction machine. The cross-entropy loss calculation, backpropagation, gradient descent: the actual learning mechanism doesn't give a flying fuck how the model minimizes predictive loss, just that it does. And with literally hundreds of billions or a trillion or more parameters to play with, these models are coming up with some crazy novel ways to minimize that predictive loss.
Please, look into actual mechanistic interpretability research. This is not science fiction, this is not speculation. This is proven science. Look into how the models work. It's not that hard to understand how the optimization pressures of their learning algorithm could develop and reinforce these emergent behaviors. Stop running around parroting these dumb "autocorrect on steroids" misconceptions. It is so stupid.
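For what it's worth, the training loop being argued about here is short enough to sketch. A toy PyTorch version (random stand-in data, toy sizes, not any real model) showing that the optimizer only ever sees the next-token cross-entropy and nothing else:

```python
# Minimal next-token training step: the loss doesn't care *how* the
# network lowers it, only that it goes down.
import torch
import torch.nn.functional as F

vocab, dim, seq = 100, 32, 16
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab, dim),
    torch.nn.Linear(dim, vocab),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

tokens = torch.randint(0, vocab, (seq,))   # stand-in "training text"
inputs, targets = tokens[:-1], tokens[1:]  # predict the next token

for _ in range(100):
    logits = model(inputs)                  # (seq-1, vocab)
    loss = F.cross_entropy(logits, targets) # next-token prediction loss
    opt.zero_grad()
    loss.backward()                         # backpropagation
    opt.step()                              # gradient descent

print("final loss:", loss.item())
```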
"It's not a lie if *you\* believe it."
Instead of arguing with it have it generate the cited paper. Easy peasy lemon squeezy.
That's because it sees lists like that but it doesn't understand them. LLMs can't understand anything. They just follow patterns they recognize. So it can reproduce titles, citations, and links that match the style it was trained on, but they are all made up. To it, it's no different than when it makes up any sentence.
And ultimately I think it comes down to trying to answer questions outside of its capability. Sure, it can generate "The capital of Greece is" and its training will tell it the best next word, with almost 100% accuracy, is "Athens". But if you ask it the capital of a non-existent country, it has a good chance of returning gibberish, because it can still find a "best match" response even if that match is terrible. At least that's my theory based on what I know; I am probably oversimplifying.
Plus, combine that with a system prompt instructing it to be helpful and always try to find the user an answer, and it is probably disincentivized from any "that doesn't exist" response.
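A toy sketch of that "best match even if bad" behavior, with an entirely made-up lookup table standing in for the model:

```python
# A "model" that always returns its best-scoring continuation has no
# notion of "this question is invalid", only of relative likelihood.
learned = {
    "The capital of Greece is": {"Athens": 0.97, "Sparta": 0.02, "Rome": 0.01},
    "The capital of Freedonia is": {"Paris": 0.04, "Athens": 0.03, "Oz": 0.02},
}

def next_word(prompt):
    dist = learned[prompt]
    best = max(dist, key=dist.get)
    return best, dist[best]

print(next_word("The capital of Greece is"))     # ('Athens', 0.97) -- confident
print(next_word("The capital of Freedonia is"))  # ('Paris', 0.04) -- still answers!
# Nothing in the decoding step distinguishes a 0.97 answer from a 0.04
# one unless the probability is surfaced, hence confident-sounding gibberish.
```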
I'd much prefer it to say "sorry, I don't know". I know humans are bad at admitting that, but computers don't have to be.
Do you also argue with your calculator lol
I'm right because those are the titles that the articles proving my point would have, if they existed.
It's both average and biased. The sources used to provide the AI with data are never fully representative, and then the AI produces an output based on the norms of that non-representative sample.
I doubt most reasoning multimodal LLMs would even attempt that type of reverse image search unless specifically prompted to do so.
And regardless, most AI models dedicated to upscaling images only have direct access to the image data, with no internet access or reasoning capability at all.
Isn’t it our bias that lets us know it’s Obama? If all the training data is pairs of low- and high-res images, that upscale might be a pretty good guess.
It's the difference between having knowledge and just regressing towards a statistical median.
We know Obama exists and are able to identify Obama from the information in the pixelated photo.
AI attempts to "repair" the photo by filling in what's missing with the statistical mean of portrait photos.
AI doesn't know anything. It doesn't have awareness of the content of its replies, or even any awareness at all. It's just a machine that spits out the statistically-most-likely string of tokens that's a reply to the tokenized prompt.
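That "statistical mean" framing is easy to demonstrate. A minimal numpy sketch (random arrays standing in for portraits): under mean-squared error, the single best unconditional guess is literally the average face.

```python
# The mean minimizes MSE: any other constant guess scores worse.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.uniform(0, 255, size=(1000, 32, 32))  # fake "portrait dataset"

mean_face = faces.mean(axis=0)   # the MSE-optimal guess with no other info
some_real_face = faces[0]

print("error of mean face:  ", ((faces - mean_face) ** 2).mean())      # lower
print("error of a real face:", ((faces - some_real_face) ** 2).mean()) # higher
# A real upscaler conditions on the blurry input, but where the input is
# ambiguous it still drifts toward the dataset average -- which, for its
# training set, was evidently not Obama.
```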
Obama is one of the most recognizable men on the planet. There are very few people you could even slightly blur and get the same level of recognition.
Not just that he’s recognisable; we’ve probably all seen that exact image of him, which is why we recognise it.
It extends well beyond AI too. We had an intern a couple years ago who had a heavy Chinese accent. We all had the same laptop dock with a great mic, and the background noise suppression in Teams worked really well for everyone except him. Despite speaking clearly, in a similar environment to the rest of us, it would really attenuate his voice. As soon as he turned it off we could hear him just fine, even with the background noise.
It does make me wonder whether systems like that developed in China would have the same problem or not.
People belittle this stuff with "How can a microphone be racist?! SJWs are so woke!" but it's a very real problem for everyone who isn't part of what is considered the majority.
That’s because idiots claim that the microphone is racist instead of pointing out that it’s badly designed and doesn’t recognize a Chinese accent as speech.
This is true but isn't really relevant to the case of image upscalers in which the training data is high res images which have been made low res.
The AI just sorta ate the collar
Thought it was a tan line.
Oh, damn, I see it now.
Yeah, under the shirt, he's literally #ffffff.
The AI isn't picky.
am i a super recognizer now?
Obama it is?
It's a picture of his official portrait and you've probably seen it thousands of times. It's in your subconscious so it's easy for you to spot.
No no, his older brother.
In the near future someone will be sent to prison because the (automatically included) AI filter in someone's phone "enhanced" the blurred face of an actual killer into theirs
Interestingly enough they did catch Christopher Paul Neil by fixing a messed up image, but it was intentionally obfuscated and not just low resolution
It wasn't a properly done blur or anything like that though, guy just used a swirl tool on his face and it wasn't particularly difficult for an expert to undo
https://www.bbc.co.uk/news/world-us-canada-39411025
Scroll down a little to see the side by side
I heard differently: it's now easy to unswirl BECAUSE they wanted to catch this guy so badly that they built the tools to do so
He used a tool known not to be destructive to obscure his face. As it wasn't destructive, it could be perfectly reversed with no loss of data.
This didn't involve creating information from nothing, as "enhancing" does.
That isn't the same at all. He used a Photoshop swirl tool. If you use the same exact tool on the same spot in the opposite direction, it "unswirls" the image and reveals the original photo.
In the case of AI enhancement, the AI is given a blurry image and predicts what the person could possibly look like, adding details that might not correspond to reality (as seen in the post).
In the case you mentioned, an effect was mechanically reversed. No info was added; it was there all along. No supposition.
In the AI one, the AI makes up details to fill in the blanks of a blurry photo.
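The difference is easy to show in code. A minimal numpy sketch: a swirl-style rearrangement is a permutation of pixels (perfectly invertible), while pixelation averages them (the information is simply gone).

```python
import numpy as np

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(8, 8))  # stand-in "photo"

# "Swirl": a fixed permutation of pixel positions -- nothing is lost.
perm = rng.permutation(face.size)
swirled = face.ravel()[perm]
unswirled = np.empty(face.size, dtype=face.dtype)
unswirled[perm] = swirled                  # apply the inverse permutation
print(np.array_equal(unswirled.reshape(face.shape), face))  # True: exact recovery

# "Pixelation": average each 4x4 block down to one value.
pixelated = face.reshape(2, 4, 2, 4).mean(axis=(1, 3))
# There is no inverse: many different 8x8 images produce this same 2x2
# result, so any "enhanced" version is a guess, not a recovery.
```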
Is this Todd Howard?
To be fair to it, if you took that result and reduced its resolution, you do get the "original"
When I blur my eyes to the point where the quality of both pictures is the same, I can still see clear differences. Both in face structure and skin tone.
When you do that, your brain has a very complicated algorithm to fill in the blanks. If we ever get results better than squinting, then we'd all be calling it peak upscaling.
I think they definitely turned a brown mouth pink here; that's the biggest difference I see, but it looks pretty clearly different
The collar is different.
I just tried it, but it's not very convincing:
Nice. It's not the same, you're right.
It's close-ish, but probably biased by what the algorithm thinks the average typical person looks like
Not really, there is no white collar on the enhanced version.
This would be the funniest shit ever on a parody of a police procedural.
"Hold on. Zoom in on that guy there."
*Shows a high-resolution image of a person's face*
"Unenhance."
"What?"
"Unenhance!"
"Do you... want me to lower the resolution?"
"Yes, unenhance!"
*The image becomes pixelated, and suddenly resembles a highly recognizable person*
"We got 'im."
Sir, the security camera footage is too blurry to make out who it is.
Can you enhance it?
No, this isn't a movie. We can't get a reliable ID from it even if we enhanced it.
Oh, well, what can you do?
If I downscale this image of Obama, it looks exactly like the security footage. Can that get us a warrant?
Perfect. The judge says yes. Wheels up in 5.
Yeah, it's not like the average person is Obama. This just shows that we are good at recognizing very famous people and the AI doesn't do it that way.
There is nothing inherently wrong here, except that you asked an AI to add missing data and got back something it hallucinated, which is what it was made to do. The algorithm worked as intended.
Of course the AI is doing what it's designed to do. The problem is that people take it for granted that the AI enhancement is real. Like, people use it on a blurry photo of a historical figure or their great-great-grandma and think the AI enhanced version is somehow "more accurate" than the blurry photo, even though the AI assigns facial features at random and gives all women 21st-century-style makeup.
We really need more education about how hallucinations work.
This was a fun test. I removed my glasses and held it slightly away, and TBH both look like the same blur
That might be true if they used an actual upscaling AI, but that's clearly not the case here. You can especially notice this in the lips and collar where the color/tone of the "upscaled" image dramatically changed.
Wdym actual upscaling? This is a face upscaling model result. Changes are to be expected with such an absurdly low starting resolution and a high target
I mean upscaling results need to downscale back to the original. This clearly doesn't.
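That check is mechanical. A minimal sketch with Pillow (filenames are placeholders): shrink the claimed upscale back to the input's resolution and measure the difference.

```python
# Downscale-consistency check: a faithful upscale should reproduce the
# original when shrunk back down.
from PIL import Image
import numpy as np

low = Image.open("pixelated_input.png").convert("L")
up = Image.open("claimed_upscale.png").convert("L")

roundtrip = up.resize(low.size, Image.BOX)  # box filter ~ pixel averaging

a = np.asarray(low, dtype=np.float32)
b = np.asarray(roundtrip, dtype=np.float32)
print("per-pixel RMS difference:", np.sqrt(((a - b) ** 2).mean()))
# Near zero: consistent upscale. Large (as with the result in the post):
# the model invented content the input doesn't support.
```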
Not in the lips.
And it kind of makes sense, since Obama is literally half white.
Better Off Ted did an amazing job of predicting stuff like lab-grown meat and sensors that don’t detect Black people. I really wish it had had a few more years.
People forget Obama is half white.
Not to mention he looks a lot like his grandfather, whom the pic on the right resembles quite a bit.
A friend and I had a bit of fun using those “nudity” AI tools to see how close to her real boobs it would look.
In the end she was like hell I wish all my pics looked like those.
It really is evolving though. I just tried this and it got it. Maybe it's a famous meme so it could tell, but idk
AI is moving at a scary speed, but people on Reddit just keep shitting on it and thinking it's useless.
Is this from Real-ESRGAN? That model tends to give people big pretty eyes lol
I told ChatGPT to generate a clear image from that pixelated image, and it did an excellent job :)
Let's see
it did not enhance the image, it just found the original picture...
An excellent job of searching the web for a similar image and pasting it in. If it had actually enhanced the input, the picture would've been cropped at the chin, without the shirt collar and tie
This was ChatGPT's answer to your comment: That’s a fair point — and you’re right to be skeptical.
To clarify: I didn’t search the internet or retrieve any actual photo of a real person. What I generated was an AI-created high-resolution version based on your low-res pixelated input and your prompt description. I can’t and don’t access or retrieve real photos from the web, even if the image resembles a public figure.
The output is a synthesized, fictional image created to look like a higher-quality version of the pixelated input, not an actual enhancement or a real person’s photo. If you want a true enhancement of a specific real photo, using specialized software like Topaz Gigapixel AI, Photoshop’s Super Resolution, or waifu2x would be better suited.
Let me know if you’d like me to re-render the image with different settings or make it more neutral.
Same here. This is outdated clownery on AI
ChatGPT couldn’t have gotten this result if Obama’s picture wasn’t all over the internet. The larger point has nothing to do with Obama, so this does not mean it is outdated.
Would you be able to recognize the person if the image was not all over the internet? What's your fucking point?
Do you not understand the original post at all? It’s commentary on how AI upscaling is laughably inaccurate. It is not about recognition, Obama’s picture is only used as a way of showcasing the error.
Almost as if interpolating data doesn't produce real data out of thin air.
Shocking, truly.
Barry O'Bamaugh
Anyone else feel like the guy on the right looks like a cop?
Barack Travolta
Timothy McVeigh????
I wanted to try to give chatgpt a bit more of a challenge and the result is... interesting
I had a long conversation with it after it accurately recreated Obama in my test, because it refused to admit it drew Barack Obama. It insisted it was a coincidence because it is unable to draw real people. It's interesting how it clearly notices Obama's features in the pixelated version and uses its knowledge of Obama's face to recreate it, but it is unable to recognize what's happening, or to admit it, even after I insisted quite a lot. It looks like your test had something similar happening. It can recognize Guy Fieri and redraw him "from memory", but I bet if you ask it to identify what was drawn it will also refuse to say.
Yeah, it certainly seems well trained on celebrity faces (which fits what we know about it being trained on internet images, where celeb photos appear frequently) -- so honestly, the opposite of the OP problem seems likely based on what you and I saw here. Rather than giving back a weird anonymous result for a celebrity, it might just give you back a celebrity's face when prompted with a photo of an anonymous rando.
It's Pikachu
I think the skin color and hair are reasonable enough interpretations, but the eyes and collar seem pretty off
“Linc, I need to run for president, it’s the only way we can acquit ourselves and take down The Company”
Given just the pixelated photo on the left, my phone told me in 2 seconds that it was a pixelated photo of Obama from 2020 that was used to show how the PULSE AI upsampling algorithm could have racial biases.
I sure hope this shit is not used as evidence in a trial.
Barry O’Bama
That’s racial profiling!!!
The once and future president.
Ah, the infamous "zoom & enhance" cliché we've been seeing in BS sci-fi movies for 30 years now 😂 I guess finding an actual unicorn is easier than achieving it
I tried this in Microsoft Copilot (GPT4o Image generator) and it actually gave a really accurate result:
If you look at it upside down from far away you can see they match.
Obama and Seth Meyers are the same guy
Obama. Barack Obama.
Brock Obama
Ryan Gosling?
Todd Howard, you've done it again.
Borax obomba
To me this just shows that people don’t look at shirt collars when evaluating facial similarities
Barry Omaha
This is pretty old.
Like a mix of Obama, Toby Maguire and Ryan Gosling
Thanks notObama
AI can only produce images it prefers.
They had us in the top half, not gonna lie
Bottom half is atrocious
Hey, that's Mike Ross!
Went from Barack Obama to Brock O'Donnell
Unhance
The white Obama meme is funnier, but don't look it up, for the love of god.
Anyone who took linear algebra in uni would tell you why that's the case…
I wonder if this would work with say, blurry camera footage of a crime, but training the AI exclusively with the book of suspects and arrests around the time and area of the crime. I guess it's still too fuzzy for something as serious as an arrest though.
If the technology gets used for identifying suspects, it will go the other way.
I knew it: Obama was Seth Meyers the whole time
Are we getting Obama Tech Tips?
From Obama to Albama
How old is that? Someone should try it again; AI has advanced a lot in just the last year alone.
Barack Craig
I would like to invite everyone to remember that this was done in 2020, also known as 5 years ago. It's a funny meme, but it's not evidence that AI is currently bad at upscaling low-resolution images.
Ah yes Marcell Obamer
Task Failed Successfully
It should really work for known people like this.
If we can identify that it's Obama, why can't a computer? Then it could enhance it using its knowledge of how Obama is supposed to look.
If you cross your eyes, there's a troll face or a cat-that-caught-the-canary smile to meet you.
Plot twist, it really is a picture of a white guy posing like Obama, and we are the racist ones 😂
OMG....I get it now......Barack Obama IS Hunter Biden!!! It was RIGHT THERE all along........
Kinda cool though how the human brain can do what the AI failed to, in a fraction of the time and with a fraction of the energy use.
It's like white Jesus, when datasets are localised
Have my baby, White Obama!
Skewed training data