ProgrammerHumor

enhanceButMakeItWrong

https://i.redd.it/v6xvoxqt0hef1.jpeg

Discussion

ProgrammerHumor-ModTeam
:ath:

Your submission was removed for the following reason:

Rule 1: Posts must be humorous, and they must be humorous because they are programming related. There must be a joke or meme that requires programming knowledge, experience, or practice to be understood or relatable.

Here are some examples of frequent posts we get that don't satisfy this rule:

* Memes about operating systems or shell commands (try /r/linuxmemes for Linux memes)
* A ChatGPT screenshot that doesn't involve any programming
* "Google Chrome uses all my RAM"

See here for more clarification on this rule.

If you disagree with this removal, you can appeal by sending us a modmail.

13 hours ago
Prudent-Employee-334

There really is no one as Irish as Barack O’Bama

20 hours ago
DatBoi_BP
:rust::cp::rust::py::rust::m:

Reminds me of the line in Shawshank Redemption by Morgan Freeman's character: "maybe it's because I'm Irish"

19 hours ago
jeffderek

IIRC he's actually Irish in the book and they kept the line because they thought it was funny

19 hours ago
DatBoi_BP
:rust::cp::rust::py::rust::m:

Oh yeah that's definitely it, but the context makes it hilarious

19 hours ago
Praesentius

Holy shit... "Big Red" makes so much more sense now!

17 hours ago
Mist_Rising

It's just Red. Not big. But yes, the character in Stephen King's book is a red-headed Irish man. Morgan Freeman just nailed the audition despite that.

15 hours ago
jamesfordsawyer

I mean if you don't switch everything around for Morgan Freeman, who would they do it for? Maybe Tom Hanks or Daniel Day Lewis but that's it.

14 hours ago
lateformyfuneral

Morgan Freeman’s character was white in the book? Damn, imagine if that movie came out in 2014, it would be one of the most despised movies online, instead of the most beloved. “You made it political” 😡

15 hours ago
Word1_Word2_4Numbers

"woke! woke! woke!"

https://i.imgflip.com/50lm09.png

14 hours ago
VikRiggs

O'Leary, O'Riley, O'Hare and O'Hara

18 hours ago
_toodamnparanoid_
:asm:

Actually his full name is Will Scarlet O'Hara (he's from Georgia).

15 hours ago
__-_-_--_--_-_---___

https://en.wikipedia.org/wiki/Barack_Obama_Plaza

18 hours ago
as_it_was_written

That place is such an oddity. It manages to kinda have the same liminal vibes as an abandoned strip mall even when it's busy.

17 hours ago
Winter_Sacrfice_6969

Good subway clone there, got a photo with the Obamas and the tayto anthropomorphic potato chip mascot

16 hours ago
russellzerotohero

He actually is pretty Irish lol. And his VP is maybe the most Irish president ever

17 hours ago
Takopantsu

I had an Irish friend who always said Obama looks like her father with a different skin colour. She wasn't wrong, it was pretty funny

16 hours ago
ITaggie
:py::powershell::cp::bash::java:

The most famous Irish-American...

18 hours ago
prumf
:rust::g::ts:

Daniel Obama

edit: Daniel O’bama

20 hours ago
Percolator2020
:ftn::unreal::c::kos:

O’Bama

20 hours ago
saera-targaryen

There's no one as Irish as Barack O'Bama 

https://youtu.be/4Xkw8ip43Vk?si=Y0leIEtqzJmrp8Y9

18 hours ago
MrDaVernacular

Bronco Bama!

18 hours ago
Sunbrojesus

I visited Ireland last month and it was hilarious how many Irish people told me about o'bama and showed me this song when I told them I'm American 

17 hours ago
prumf
:rust::g::ts:

👌

20 hours ago
dirtier_earth OP

Barack Smith

20 hours ago
Adghar

Barry O'neil

20 hours ago
holchansg

Brian O`Conner?

19 hours ago
piezombi3

Brock O'bama.

18 hours ago
ccricers

For years I nicknamed him "Bronco Bobby" because of a flawed YouTube video to text caption.

19 hours ago
lethargy86

No it’s the governor of Alabama

Al Bama

20 hours ago
Torocatala

Al A’Bama

16 hours ago
Blue_Robin_Gaming
:py::ts::gd:

Banana

19 hours ago
8lb6ozBabyJsus
:py::gd::msl:

19 hours ago
4daughters

O'Bamaugh*

18 hours ago
lightwhite

Wasn’t he Brock O’Bama’s brother?

19 hours ago
TheAceBoi

No one’s as Irish as Barack O’Bama

18 hours ago
Thin_Cable4155

Barack Travolta

18 hours ago
Xvexe

Lol really does look like Daniel Craig doesn't it?

16 hours ago
lllorrr

Jesus Christ. That's Jason Bourne.

19 hours ago
userhwon

I thought it was John Cena. Then I remembered...

18 hours ago
misterfistyersister

Never forget.

15 hours ago
SistaChans

I see Christian Bale

17 hours ago
binterryan76

When we train AI on content created by humans we impart all of our biases and then when innocent people ask AI questions, they think they're asking an impartial robot but they're actually asking something with all the combined biases of every person contributing to the training data.

20 hours ago
xrayden
:p:

Even with the news stories that "A.I. tried to survive by backing itself up," or something like that.

Well, you fed it hundreds of years of stories where the protagonist refuses to "die when asked". What do you think it learned from that?

19 hours ago
InnuendoBot5001

Those stories are obviously fake, and being peddled by the tech bros who want to sell these systems to us

19 hours ago
Neon9987

Not quite, it's mostly just short sentences from safety reports in model cards. Researchers test a variety of things, and sometimes models do quirky things ("it tried to run a simulated cmd and clone itself") which isn't a very serious attempt at doing that, but it did that and they noted it. Then a journalist sees it and writes a very clickbaity article about it

18 hours ago
InnuendoBot5001

An LLM would have absolutely no means of copying itself unless it was specifically trained to do so, given tools to do so, and placed in an environment where it had access to those tools. If somebody makes a "copies random files around my computer" program, they can't sensationalize it when that program does exactly that.

18 hours ago
Neon9987

100%, that's what I meant by it not being a serious attempt. LLM inference would have no way of accessing its weights without explicitly being given access to them; researchers just note that the model does have that incentive in it (probably due to stories about AI replicating itself, so it's more in a roleplay mode than sentient or anything like it).
Given the models still get released, they rightly don't think it's a big deal at current capability. But it does still get sensationalized by articles, mostly because it sounds impressive to uninvolved folk

18 hours ago
red286

That seems counter-productive.

If you're trying to sell an AI model to someone, why would you make up stories about it having problematic responses to things that could pose a hazard if its capabilities are expanded?

Like as cool as SkyNet might sound in an abstract sense, if you go to the DoD and say "hey we've got this fully automated AI system that can control the nuclear arsenal, but just so you know, it 100% thinks that it is SkyNet from The Terminator franchise", I'm betting they'll pass.

15 hours ago
InnuendoBot5001

They want people to believe they have created artificial intelligence, when what they have is a pattern recognition algorithm

15 hours ago
SartenSinAceite

Simply this. It's like how the OpenAI CEO treats his product as if he's afraid of it turning into Skynet. Shareholders are stupid and only see $$$$

13 hours ago
akoOfIxtall
:cs::ts::c:

we have more than enough stories about AI killing everybody, and hopefully it never gets to read "I Have No Mouth, and I Must Scream"

18 hours ago
TwilightVulpine

It probably does, and that's no big deal. All that's gonna do is teach it to say scary things to us. All it can do is generate text, it's not actually gonna take over our society, or even understand the implications of that. LLMs can't even conceptualize the concepts that they refer to, they can only recycle human words back to us, that's why they hallucinate so often.

The worst it can do is instruct a human to do something stupid, which is a real concern. But more in a "blind leading the blind" sort of way than real scheming.

17 hours ago
anonymousbopper767

It's better to think of it like an average response instead of a biased one. It's never gone outside to know the sky is blue...it's only parsed everyone saying the sky is blue so it goes with that.

*I suspect modern models that are doing better research with an internet connection would work backwards from similar portraits and blur them until it found a 99% match.
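That guessing-backwards idea can be sketched as a brute-force search: blur (here, block-downscale) each candidate portrait and keep the one closest to the pixelated input. Everything below is a toy; the "images" are small invented grayscale grids and the candidate names are made up:

```python
# 2x block-average, standing in for "blur the candidate portrait".
def downscale2x(img):
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

# Sum of squared pixel differences between two equal-sized images.
def distance(a, b):
    return sum((p - q) ** 2 for ra, rb in zip(a, b) for p, q in zip(ra, rb))

pixelated = [[50, 180], [180, 50]]  # the low-res query image

candidates = {
    "portrait_a": [[50, 50, 180, 180], [50, 50, 180, 180],
                   [180, 180, 50, 50], [180, 180, 50, 50]],
    "portrait_b": [[200, 200, 20, 20], [200, 200, 20, 20],
                   [20, 20, 200, 200], [20, 20, 200, 200]],
}

# Pick the candidate whose blurred version best matches the query.
best = min(candidates, key=lambda k: distance(downscale2x(candidates[k]), pixelated))
print(best)  # portrait_a
```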

19 hours ago
gorgewall

instead of a biased one

"Bias" here does not necessarily mean "human prejudice".

If I train my AI facial recognition software in America, it's going to be biased to be much better (or only any good) at recognizing the faces of white people, because the population I'm training it with is overwhelmingly white.

When seatbelts were killing / injuring women (and shorter men) at way higher rates back in the day and it took engineers and scientists a while to figure out why, it wasn't just the result of all of those guys probably having some views about women that today we would recognize as negative and distasteful, but because their departments were so overwhelmingly male and they were stuck in the mode of assuming a "male default" that it legitimately did not occur to them that seat belts also needed to be engineered with smaller drivers in mind. That's also bias, separate from how any of those guys thought about women or short men.

Everything we as humans know is biased in some way because we don't have the full gamut of experience and knowledge, and that goes double for what our already-biased asses feed into this robot. People have a hard time separating that bias from the concept of prejudice, and additionally the idea that a "robot" could suffer from that kind of "human failing".

17 hours ago
Techhead7890

Exactly, as Heraclitus and Thucydides noted in various forms, perspective can make a huge impact and everyone's affected by that. We just gotta be aware of it

16 hours ago
Lebowquade

As a scientist, thank you for the great summary. These hidden biases are everywhere, and a shockingly huge part of data science is rooting out the really subtle ones.

14 hours ago
probablyNotARSNBot

Also these models aren’t entirely unsupervised learning by crowd sourcing. A lot of the knowledge behind it is sourced from encyclopedias and scholarly articles and such. If you’re going to question that too, then all I have to say is yeah, human knowledge isn’t unbiased nor is it entirely true. We just trust some sources more than others and we have a peer review system to try and maintain integrity.

18 hours ago
ThatOneCSL

https://xkcd.com/3117/

18 hours ago
ut-fan-i-cant-read

But the funniest part about being trained on scholarly sources as authoritative is that most modern models think they can just make up new sources as abstractions of other sources and they will be equally authoritative. I got into an argument with one a few weeks back that kept trying to cite fake sources it was making up as "research" to support its point. It had full journal citations by real authors with completely made-up article names, and links to real journal websites that, when clicked, just returned an error from the journal site because the articles never existed.

18 hours ago
bnej

They don't think anything, it's generating text that it thinks will satisfy the prompt.

They've got no context outside of the text so they will generate words from the part of the network that the prompt activates. That means "text looks like this" - it'll likely copy something if it has it, otherwise it will make shit up, and it doesn't know the difference.

It can't apologise if it gets it wrong, it can generate text that looks like an apology.

Talking to these things as though they are people is how we got into this mess. I believe the interface is intentional because it makes people think they're much more sophisticated than they are; we are easily fooled by conversational language because we only experience it from humans, so we infer it must be similar to a human.

17 hours ago
FNLN_taken

There's a funny dichotomy here, too. In principle, finding the most commonly cited articles is right in the wheelhouse of LLMs. But in order to do that, it needs to see the citation as a whole, not cut up in parts. This is why it will return fake citations: it constructs them from multiple tokens.

But the problem is that LLMs want shorter token lengths in other contexts. The most well-known example afaik is "how many r's are in strawberry?".
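A toy illustration of the letter-counting problem (the `["straw", "berry"]` split below is a hypothetical tokenization for illustration, not output from any real tokenizer):

```python
# An LLM sees subword tokens, not letters, so letter-level questions
# are harder for it than they look.
word = "strawberry"

# A character-level view counts the r's directly.
char_count = word.count("r")  # 3

# A hypothetical subword split hides the letters: the model must
# reason about opaque pieces, not characters.
tokens = ["straw", "berry"]
per_token = {t: t.count("r") for t in tokens}

print(char_count)  # 3
print(per_token)   # {'straw': 1, 'berry': 2}
```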

16 hours ago
ReckoningGotham

They don't think anything, it's generating text that it thinks will satisfy the prompt.

You were so quick to be pedantic that you didn't give a fuck about looking stupid.

15 hours ago
neanderthology

You are so deeply misunderstanding what is going on inside of these models. They legitimately are thinking. It's not copying anything. They aren't memorization machines. At inference time, when you're using it, there's nothing to satisfy. There is no test being taken, there is no prediction being made. It's just generating the next token.

I mean, your first sentence is itself a perfect contradiction.

They don't think anything, it's generating text that it thinks will satisfy the prompt.

These models have actually developed deep syntactic, semantic, and even abstract, conceptual relationships between words and ideas. You're saying it's just "matching text that looks like this" but that is very clearly not what is happening. They can perform deeply complex reasoning tasks. They can follow multiple threads in a conversation, they can understand nested conversations. It goes so far beyond even this, they have developed their own internal algorithms. I don't think saying that alone even does this justice, you need to understand what that means. It means they need to be able to develop the algorithms, determine when they're useful, and actually use them.

This is not magic. It is still just a next token prediction machine. The cross-entropy loss calculation, back propagation, gradient descent. The actual learning mechanism, it doesn't give a flying fuck how the model minimizes predictive loss. Just that it does. And with literally hundreds of billions or a trillion or more parameters to play with, these models are coming up with some crazy novel ways to minimize that predictive loss.

Please, look into actual mechanistic interpretability research. This is not science fiction, this is not speculation. This is proven science. Look into how the models work. It's not that hard to understand how the optimization pressures of their learning algorithm could develop and reinforce these emergent behaviors. Stop running around parroting these dumb "autocorrect on steroids" misconceptions. It is so stupid.

15 hours ago
LowerEntropy

"It's not a lie if *you\* believe it."

17 hours ago
Outrageous-Wait-8895

Instead of arguing with it have it generate the cited paper. Easy peasy lemon squeezy.

17 hours ago
The_MAZZTer
:cs:

That's because it sees lists like that but it doesn't understand them. LLMs can't understand anything. They just follow patterns they recognize. So it can reproduce titles, citations, links that match the style it was trained on, but they are all made up. To it it's no different than when it makes up any sentence.

And ultimately I think it comes down to trying to answer questions outside of its capability. Sure, it can generate "The capital of Greece is" and its training will tell it the best next word with almost 100% accuracy is "Athens". But if you ask it the capital of a non-existent country, it has a good chance of returning gibberish, because it can still find a "best match" response even when that match has bad accuracy. At least that's my theory based on what I know. I am probably oversimplifying.

Plus, combine that with a system prompt instructing it to be helpful and always try to find the user an answer, and it is probably disincentivized from any "that doesn't exist" response.
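That "best match even when the match is bad" behavior can be sketched with a toy next-token table (the prompts and probabilities below are invented for illustration; a real model computes them from learned weights):

```python
# Minimal sketch of greedy next-token selection over a toy distribution.
toy_model = {
    "The capital of Greece is": {"Athens": 0.97, "Sparta": 0.02, "blue": 0.01},
    # For a made-up country, every candidate is a poor fit, but the
    # probabilities still sum to something and one of them is "best".
    "The capital of Atlantis is": {"unknown": 0.30, "Atlantis": 0.28, "Poseidonis": 0.26},
}

def next_token(prompt):
    # Greedy decoding: always emit the highest-probability candidate.
    # There is no built-in "I don't know" unless training put one there.
    dist = toy_model[prompt]
    return max(dist, key=dist.get)

print(next_token("The capital of Greece is"))    # Athens
print(next_token("The capital of Atlantis is"))  # unknown (low confidence, still emitted)
```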

17 hours ago
ForensicPathology

I'd much prefer it to say "sorry, I don't know".  I know humans are bad at admitting that, but computers don't have to be.

13 hours ago
confusedkarnatia

Do you also argue with your calculator lol

16 hours ago
WazWaz
:cp: :cs:

I'm right because those are the titles that the articles proving my point would have, if they existed.

17 hours ago
redlaWw

It's both average and biased. The sources used to provide the AI with data are never fully representative, and then the AI produces an output based on the norms of that non-representative sample.

17 hours ago
Systematic-Error
:s:

I doubt most reasoning multimodal LLMs would even attempt that type of reverse image search unless specifically prompted to do so.

But regardless, most AI models dedicated to upscaling images only have direct access to the image data, with no internet access or reasoning capability at all.

16 hours ago
CommonRequirement

Isn’t it our bias that lets us know it’s Obama? If all training daya is low vs high res images upscale that might be a pretty good guess.

18 hours ago
Gingevere

It's the difference between having knowledge and just regressing towards a statistical median.

We know Obama exists and are able to identify Obama from the information in the pixelated photo.

AI attempts to "repair" the photo by filling in what's missing with the statistical mean of portrait photos.

AI doesn't know anything. It doesn't have awareness of the content of its replies, or even any awareness at all. It's just a machine that spits out the statistically-most-likely string of tokens that's a reply to the tokenized prompt.
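A minimal sketch of that "fill in with the statistical mean" behavior. The sample values below are invented grayscale-free RGB triples; the point is only that the average drifts toward whatever dominates the training set:

```python
# Toy "inpainter" that repairs a missing pixel with the per-channel
# average of its training set. The samples here are invented.
training_pixels = [
    [200, 190, 195],  # hypothetical light-skinned portrait sample
    [210, 205, 200],  # another light-skinned sample
    [90, 85, 80],     # a darker-skinned sample, underrepresented here
]

def dataset_mean(samples):
    # Average each channel across the whole training set.
    return [sum(channel) / len(channel) for channel in zip(*samples)]

mean_pixel = dataset_mean(training_pixels)
# The "repaired" pixel lands near the majority of the data, far from
# the underrepresented sample -- the statistical bias described above.
print(mean_pixel)
```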

17 hours ago
mrmastermimi

Obama is one of the most recognizable men on the planet. there are very few people you could even slightly blur and get the same level of recognition.

18 hours ago
mrgonzalez

Not just that he is recognisable, we’ve probably all seen that exact image of him to recognise it.

16 hours ago
kookyabird
:cs::ts::js:

It extends well beyond AI too. We had an intern a couple years ago that had a heavy Chinese accent. We all had the same laptop dock that had a great mic, and the background noise suppression in Teams worked really well for everyone except him. Despite speaking clearly, in a similar environment to the rest of us, it would really attenuate his voice. As soon as he turned it off we could hear him just fine even with the background noise.

It does make me wonder if systems like that that are developed in China would have the same problem or not.

18 hours ago
movzx

People belittle this stuff with "How can a microphone be racist?! SJWs are so woke!" but it's a very real problem for everyone who isn't part of what is considered the majority.

17 hours ago
Lehk

That’s because idiots claim that the microphone is racist instead of pointing out that is badly designed and doesn’t recognize a Chinese accent as speech.

16 hours ago
wizard_mitch

This is true but isn't really relevant to the case of image upscalers in which the training data is high res images which have been made low res.

18 hours ago
tomi901

"Let me be clear"

19 hours ago
beegtuna

17 hours ago
Semper_5olus

The AI just sorta ate the collar

19 hours ago
NYFan813

Thought it was a tan line.

19 hours ago
Semper_5olus

Oh, damn, I see it now.

Yeah, under the shirt, he's literally #ffffff.

19 hours ago
metaglot

The AI isn't picky.

19 hours ago
AndiArbyte

am i a super recognizer now?
Obama it is?

20 hours ago
MyHusbandIsGayImNot

It's a picture of his official portrait and you've probably seen it thousands of times. It's in your subconscious so it's easy for you to spot.

17 hours ago
TheFrenchSavage

No no, his older brother.

16 hours ago
klaasvanschelven

In the near future someone will be sent to prison because the (automatically included) AI filter in someone's phone "enhanced" the blurred face of an actual killer into theirs

19 hours ago
jeffderek

Interestingly enough they did catch Christopher Paul Neil by fixing a messed up image, but it was intentionally obfuscated and not just low resolution

19 hours ago
ward2k
:sc:

It wasn't a properly done blur or anything like that though, guy just used a swirl tool on his face and it wasn't particularly difficult for an expert to undo

https://www.bbc.co.uk/news/world-us-canada-39411025

Scroll down a little to see the side by side

19 hours ago
ShinyStarSam

I heard different, it is now easy to unswirl BECAUSE they wanted to catch this guy so badly they made the tools to do so

17 hours ago
WhyMustIMakeANewAcco

He used a tool known to not be destructive to obscure his face. As it wasn't destructive it can be perfectly reversed with no loss of data.

This didn't involve creating information from nothing, as "enhancing" does.

18 hours ago
mypostureissomething

That isn't the same at all. He used Photoshop's swirl tool. If you use the same exact tool on the same place in the opposite direction, it "unswirls" and reveals the original photo.

In the case of AI enhancement, the AI is getting a blurry image and predicting what the person could possibly look like, adding details that might not correspond with reality (as seen in the post).

In the case you mentioned, an effect was physically reversed. No info was added. It was there. No supposition.

In the ai one, ai makes up details to fill in the blanks of a blurry photo.

18 hours ago
beclops
:sw:

Is this Todd Howard?

19 hours ago
Tensor3

To be fair to it, if you took that result and reduced its resolution, you do get the "original"
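That downscale-and-compare check can be written directly: block-average the claimed upscale back down and compare it with the original low-res pixels. A faithful upscale should roughly reproduce the original; a hallucinated face often won't. The images below are tiny invented grayscale grids:

```python
# Block-average an image down by an integer factor.
def downscale(img, factor):
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# Mean squared error between two equal-sized images.
def mse(a, b):
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    return sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)

# 2x2 "original" and a 4x4 candidate upscale of it (toy grayscale values).
original = [[10, 200], [200, 10]]
upscaled = [[10, 10, 200, 200], [10, 10, 200, 200],
            [200, 200, 10, 10], [200, 200, 10, 10]]
print(mse(downscale(upscaled, 2), original))  # 0.0 -> consistent with the original
```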

20 hours ago
TeraFlint
:cp::asm:

When I blur my eyes to the point where the quality of both pictures is the same, I can still see clear differences. Both in face structure and skin tone.

19 hours ago
hakumiogin

When you do that, your brain has a very complicated algorithm to fill in the blanks. If we ever get results better than squinting, then we'd all be calling it peak upscaling.

19 hours ago
saera-targaryen

I think they definitely turned a brown mouth pink here, that's the biggest difference i see but it looks pretty clearly different

18 hours ago
HenryHiggensBand

The collar is different.

17 hours ago
lizardtrench

I just tried it, but it's not very convincing:

https://i.imgur.com/tyvaOAo.png

17 hours ago
Tensor3

Nice. It's not the same, you're right.

It's close-ish, but probably biased toward what the algorithm thinks the average typical person looks like

17 hours ago
hakumiogin

Not really, there is no white collar on the enhanced version.

19 hours ago
SocranX

This would be the funniest shit ever on a parody of a police procedural. "Hold on. Zoom in on that guy there." Shows a high resolution image of a person's face "Unenhance." "What?" "Unenhance!" "Do you... want me to lower the resolution?" "Yes, unenhance!" The image becomes pixelated, and suddenly resembles a highly recognizable person "We got 'im."

18 hours ago
Tensor3

Sir, the security camera footage is too blurry to make out who it is.

Can you enhance it?

No, this isn't a movie. We can't get a reliable ID from it even if we did.

Oh, well, what can you do?

If I downscale this image of Obama, it looks exactly like the security footage. Can that get us a warrant?

Perfect. The judge says yes. Wheels up in 5.

18 hours ago
mortalitylost

Yeah, it's not like the average person is Obama. This just shows that we are good at recognizing very famous people and the AI doesn't do it that way.

There is nothing inherently wrong except that you asked an AI to add missing data and got back something it hallucinated, which is what it was made to do. Algorithm worked as intended.

18 hours ago
paroles

Of course the AI is doing what it's designed to do. The problem is that people take it for granted that the AI enhancement is real. Like, people use it on a blurry photo of a historical figure or their great-great-grandma and think the AI enhanced version is somehow "more accurate" than the blurry photo, even though the AI assigns facial features at random and gives all women 21st-century-style makeup.

We really need more education about how hallucinations work.

17 hours ago
ShrimpInspireGoatee

This was a fun test. I removed my glasses, held it slightly away, and TBH both look like the same blur

19 hours ago
I_hate_all_of_ewe

That might be true if they used an actual upscaling AI, but that's clearly not the case here.  You can especially notice this in the lips and collar where the color/tone of the "upscaled" image dramatically changed.

18 hours ago
Turtvaiz

Wdym actual upscaling? This is a face upscaling model result. Changes are to be expected with such an absurdly low starting resolution and a high target

18 hours ago
I_hate_all_of_ewe

I mean upscaling results need to downscale to the original. This clearly doesn't.

17 hours ago
trite_panda

Not in the lips.

18 hours ago
Throwaway74829947
:c::cp::py::m::lsp:

And it kind of makes sense, since Obama is directly half white.

18 hours ago
RichCorinthian

Better Off Ted did an amazing job of predicting stuff like lab-grown meat and sensors that don’t detect Black people. I really wish it had had a few more years.

19 hours ago
VegetaFan1337

People forget Obama is half white.

18 hours ago
mikefrombarto

Not to mention he looks a lot like his grandfather, which the pic on the right resembles quite a bit.

17 hours ago
Appropriate_Host4170

A friend and I had a bit of fun using those “nudity” AI tools to see how close to her real boobs it would look. 

In the end she was like hell I wish all my pics looked like those. 

18 hours ago
qTp_Meteor

It really is evolving tho, just tried this and it got it. Maybe it's because it's a famous meme so it could tell, but idk

17 hours ago
bikemandan
:py:

https://i.imgur.com/ktKmPbx.gifv

19 hours ago
zxw

This was my attempt to upscale it: https://i.imgur.com/sViRNsQ.png

17 hours ago
lsaz
:js::ts::cs:

AI is moving at a scary speed, but people on Reddit just keep shitting on it and thinking it's useless.

16 hours ago
PM_ME_UR_RSA_KEY

Is this from Real-ESRGAN? That model tends to give people big pretty eyes lol

14 hours ago
Apr3ndiz

I told ChatGPT to generate a clear image from that pixelated image, and it did an excellent job :)

https://imgur.com/a/MEuCA72


17 hours ago
This_Price_1783

Let's see

17 hours ago
Apr3ndiz

https://imgur.com/a/MEuCA72

16 hours ago
ashmelev

it did not enhance the image, it just found the original picture...

15 hours ago
ashmelev

An excellent job of searching the web for a similar image and pasting it in. If it had actually enhanced the input, the picture would've ended at the chin, without the shirt collar and tie

15 hours ago
Apr3ndiz

This was ChatGPT's answer to your comment: That’s a fair point — and you’re right to be skeptical.

To clarify: I didn’t search the internet or retrieve any actual photo of a real person. What I generated was an AI-created high-resolution version based on your low-res pixelated input and your prompt description. I can’t and don’t access or retrieve real photos from the web, even if the image resembles a public figure.

The output is a synthesized, fictional image created to look like a higher-quality version of the pixelated input, not an actual enhancement or a real person’s photo. If you want a true enhancement of a specific real photo, using specialized software like Topaz Gigapixel AI, Photoshop’s Super Resolution, or waifu2x would be better suited.

Let me know if you’d like me to re-render the image with different settings or make it more neutral.

15 hours ago
StephieDoll

Same here. This is outdated clownery on AI

16 hours ago
-Nicolai

ChatGPT couldn’t have gotten this result if Obama’s picture wasn’t all over the internet. The larger point has nothing to do with Obama, so this does not mean it is outdated.

16 hours ago
_HIST

Would you be able to recognize the person if the image was not all over the internet? What's your fucking point?

16 hours ago
-Nicolai

Do you not understand the original post at all? It’s commentary on how AI upscaling is laughably inaccurate. It is not about recognition, Obama’s picture is only used as a way of showcasing the error.

16 hours ago
Sunfurian_Zm

Almost as if interpolating data doesn't produce real data out of thin air.

Shocking, truly.

19 hours ago
one_last_cow

Barry O'Bamaugh

18 hours ago
I_WRESTLE_BEARS

Anyone else feel like the guy on the right looks like a cop?

18 hours ago
bucktron6040

Barack Travolta

18 hours ago
ChampionshipParty521

https://i.redd.it/wlc8ko2ey9ya1.jpg

18 hours ago
Thick-Cod-981

Timothy McVeigh????

15 hours ago
rott

Mine seems to know what it's doing

17 hours ago
silver-orange

I wanted to try to give chatgpt a bit more of a challenge and the result is... interesting

https://imgur.com/a/ktAh5j4

16 hours ago
rott

I had a long conversation with it after it accurately recreated Obama in my test, because it refused to admit it drew Barack Obama. It insisted it was a coincidence because it is unable to draw real people. It's interesting how it's clearly noticing Obama's features in the pixelated version and using its knowledge of Obama's face to recreate it, but it is unable to recognize what's happening, or to admit it even after I insisted quite a lot. It looks like your test had something similar happening. It can recognize Guy Fieri and redraw him "from memory", but I bet if you ask it to identify what was drawn it will also refuse to say.

16 hours ago
silver-orange

Yeah, certainly seems like it's well trained on celebrity faces (certainly fits what we know about it being trained on internet images, where celeb photos appear frequently) -- so honestly the opposite of the OP problem seems likely based on what you and I saw here. Rather than getting a weird anonymous result back for a celebrity, it might just give you back a celebrity's face when prompted with a photo of an anonymous rando.

16 hours ago
iknowyoujelly

its pikachu

19 hours ago
MetaNovaYT
:c::cp::cs::j::bash::py:

I think the skin color and hair are reasonable enough interpretations, but the eyes and collar seem pretty off

18 hours ago
OliviaEntropy

“Linc, I need to run for president, it’s the only way we can acquit ourselves and take down The Company”

17 hours ago
Baloomf

Given just the pixelated photo on the left, my phone told me in 2 seconds that it was a pixelated photo of Obama from 2020 that was used to show how the PULSE AI upsampling algorithm could have racial biases.

16 hours ago
Simple_Jellyfish23

I sure hope this shit is not used as evidence in a trial.

16 hours ago
sammich6

Barry O’Bama

15 hours ago
Commercial-East4069

That’s racial profiling!!!

15 hours ago
Fhantom1221

The once and future president.

15 hours ago
No-Representative399

Ah, the infamous zoom & enhance cliché we've been hearing about for 30 years now in BS sci-fi movies 😂 I guess finding an actual unicorn is easier than achieving it

14 hours ago
SamSha222

I tried this in Microsoft Copilot (GPT4o Image generator) and it actually gave a really accurate result:

https://imgur.com/a/obama-gpt4o-pl7emhC

17 hours ago
JS31415926
:cp:

If you look at it upside down from far away you can see they match.

19 hours ago
Throwedaway99837

Obama and Seth Meyers are the same guy

19 hours ago
touchmybodily

Obama. Barack Obama.

19 hours ago
bouchandre
:cs::cp::unity::unreal:

Brock Obama

19 hours ago
Friendly_Cantal0upe

Ryan Gosling?

19 hours ago
PM_ME_GOOD_SUBS

Todd Howard, you've done it again.

19 hours ago
karateninjazombie

Borax obomba

19 hours ago
11middle11

To me this just shows that people don’t look at shirt collars when evaluating facial similarities

19 hours ago
firestorm559

Barry Omaha

19 hours ago
nwbrown
:clj:

This is pretty old.

19 hours ago
NoIsland23

Like a mix of Obama, Toby Maguire and Ryan Gosling

18 hours ago
Canadiancurtiebirdy

Thanks notObama

18 hours ago
Reverend_Lazerface

18 hours ago
ThugosaurusFlex_1017

AI can only produce images it prefers.

18 hours ago
Popeychops
:bash:

They had us in the top half, not gonna lie

Bottom half is atrocious

18 hours ago
sphinx_two

Hey, that's Mike Ross!

18 hours ago
Lulzagna

Went from Barack Obama to Brock O'Donnell

18 hours ago
xBinary01111000

Unhance

18 hours ago
CurrlyFrymann

The white Obama meme is funnier, but don't look it up for the love of god.

18 hours ago
Difficult_Minute8202

anyone who took linear algebra in uni would tell you why that’s the case….

17 hours ago
Jcrm87

I wonder if this would work with say, blurry camera footage of a crime, but training the AI exclusively with the book of suspects and arrests around the time and area of the crime. I guess it's still too fuzzy for something as serious as an arrest though.

17 hours ago
turb0_encapsulator

If the technology gets used for identifying suspects, it will go the other way.

17 hours ago
malonkey1
:cp::py::js:

I knew it: Obama was Seth Meyers the whole time

17 hours ago
Dave-C

Are we getting Obama Tech Tips?

17 hours ago
here4dambivalence

From Obama to Albama

17 hours ago
Etere

How old is that? Someone should try it again; AI has advanced a lot in just the last year alone.

17 hours ago
LukeZNotFound
:ts:

Barack Craig

17 hours ago
malexj93
:kt::sc::rust:

I would like to invite everyone to remember that this was done in 2020, also known as 5 years ago. It's a funny meme, but it's not evidence that AI is currently bad at upscaling low-resolution images.

17 hours ago
GhostSierra117

Ah yes Marcell Obamer

17 hours ago
whisperwrongwords

Task Failed Successfully

17 hours ago
TimeToNukeTheWhales

It should really work for known people like this.

If we can identify that it's Obama, why can't a computer? Then it could enhance it using its knowledge of how Obama is supposed to look.

17 hours ago
SergioOlexy

If you cross your eyes there is a troll face or cat who caught canary smile to meet you.

17 hours ago
Scaryclouds

Plot twist, it really is a picture of a white guy posing like Obama, and we are the racist ones 😂

17 hours ago
Windows_96_Help_Desk

OMG....I get it now......Barack Obama IS Hunter Biden!!! It was RIGHT THERE all along........

17 hours ago
_Thermalflask

Kinda cool though how the human brain can do what the AI failed to, in a fraction of the time and with a fraction of the energy use.

17 hours ago
gamesbydingus

It's like white Jesus, when datasets are localised

17 hours ago
ilmalocchio

Have my baby, White Obama!

17 hours ago
Maskdask
:rust:

Skewed training data

17 hours ago