Large language models may become an important component in whatever comes next, but I think we still need a component that can do proper reasoning and has a proper memory that is not susceptible to hallucinating facts.
You might even say LLMs are good with text in the same way that early automobiles were good for transportation, provided you watched out for the potholes and stream crossings and didn't try to cross the river on the railroad bridge. (DeLoreans are said to be good at that, though :).)
It seems a matter of perspective to me whether you call it "dead end" or "stepping stone".
To give some pause before dismissing the current state of the art prematurely:
I would already consider current LLM-based systems more "intelligent" than a housecat. And a pet's intelligence is enough to have ethical implications, so we have arguably reached a very important milestone already.
I would argue that the biggest limitation on current "AI" is that it is architected to not have agency; if you had GPT-3 level intelligence in an easily anthropomorphizable package (Furby-style, capable of emoting/communicating by itself), public outlook might shift drastically without even any real technical progress.
I do suspect this is only achievable because the model was specifically trained for this.
But the same is true for humans; children can't really "reason themselves" into basic arithmetic: that's a skill that requires considerable training.
I do concede that this (learning/skill acquisition) is something that humans can do "online" (within days/weeks/months) while LLMs need a separate process for it.
> in a strong version of this test I would want nothing related to long multiplication in the training data.
Is this not a bit of a double standard? I think at least 99/100 humans with minimal previous math exposure would utterly fail this test.
The models can do surprisingly large numbers correctly, but they have essentially memorized them. As you make the numbers longer and longer, the result becomes garbage. If they actually reasoned about it, this would not happen: multiplying long numbers is not really harder than multiplying two-digit numbers, just more time-consuming and annoying.
And I do not want the model to figure multiplication out on its own; I want to provide it with what teachers tell children until they get to long multiplication. The only place I want to push the AI is to do it for much longer numbers, not only the two, three, or four digits you do in primary school.
And the difference is not only online vs. offline: large language models have almost certainly been trained on heaps of basic mathematics, but did not learn to multiply. They can explain to you how to do it because they have seen countless explanations and examples, but they cannot actually do it themselves.
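To make that concrete, here is a rough sketch (in Python; the names are mine, purely illustrative) of the schoolbook procedure a teacher would hand a child. Nothing about it changes as the operands get longer; only the number of steps grows.

```python
# Schoolbook long multiplication on decimal strings: one row of
# partial products per digit of the first operand, with carries.
def long_multiply(a: str, b: str) -> str:
    result = [0] * (len(a) + len(b))      # room for every output digit
    for i, da in enumerate(reversed(a)):  # one "row" per digit of a
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10    # write down the ones digit
            carry = total // 10           # carry the rest leftward
        result[i + len(b)] += carry
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

assert long_multiply("12", "34") == "408"
# The identical steps handle 50-digit operands; there are just more of them.
assert long_multiply("9" * 50, "9" * 50) == str((10**50 - 1) ** 2)
```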
IMO "ability to communicate" is a somewhat fair proxy for intelligence (even if it does not capture all of an animals capabilities), and current LLMs are clearly superior to any animal in that regard.
An interesting experiment would be to have a robot with an LLM mind and see what things it could figure out, like would it learn to charge itself or something. But personally I don't think they have anywhere near the general intelligence of animals.
Only a very small % of the population is leveraging AI in any meaningful way. But I think today's tools are sufficient for them to do so if they wanted to start and will only get better (even if the LLMs don't, which they will).
When I wrote dead end, I meant for achieving an AI that can properly reason and knows what it knows and maybe is even able to learn. For finding stuff in heaps of text, large language models are relatively fine and can improve productivity, with the somewhat annoying fact that one has to double check what the model says.
> When early automobiles began appearing in the 1890’s — first steam-powered, then electric, then gasoline — most carriage and wagon makers dismissed them. Why wouldn’t they? The first cars were: Loud and unreliable, Expensive and hard to repair, Starved for fuel in a world with no gas stations, Unsuitable for the dirt roads of rural America
That sounds like complaints against today's LLM limitations. It will be interesting to see how your comment ages in 5-10-15 years. You might be technically right that LLMs are a dead end. But the article isn't about LLMs really, it's about the change to an "AI" world from a non-AI world and how the author believes it will be similar to the change from the non-car to the car world.
That last one isn’t useful to society, but it is for the individual.
I know plenty of people using LLMs for stuff like this, in all sorts of walks of life.
edit (it's late, I'm just being snarky. I don't think researchers whose jobs are implicitly tied to hype are a good example of workers increasing their productivity)
But it doesn't understand. It's just similarity and next-likely-token search. The trick is that this turns out to be useful or pleasing when tuned well enough.
How many people read a version of the same story and pivoted their company to focus on SecondLife, NFTs, blockchain or whatever else technology was hyped at the time and tanked? That's the other half of this story.
- Virtual worlds / metaverses
You can replicate real life, but it's kind of boring.
- 3D printing
Became a useful industrial tool, but home 3D printing never went mainstream. At one point Office Depot offered 3D printing. No longer.
- Self-driving minibuses
Several startups built these, and some were deployed. Never really caught on. You'd think that airport parking shuttles and such would use these, but they don't.
- Small gas turbines
Power for cars, buses, trucks, backup power, and other things where you need tens to hundreds of kilowatts in a small package. All those things were built and worked. But the technology never became cheap. APUs for large aircraft and the US Army's M1 tank variants remain among the few deployed applications. The frustration of turbine engines is that below bizjet size, smaller units are not much cheaper.
- 3D TV
That got far enough that 3D TV sets were in stores. But they didn't sell.
- Nuclear power
Works, mostly, but isn't really cost-effective. Failures are very expensive and require evacuating sizable areas.
- Proof of correctness for programs
After forty years, it's still a clunky process.
- Maglev trains
Works, but insanely expensive.
- The Segway
Works, but scooters do the same job with less expense.
- 3D input devices
They used to be seen at trade shows, but it turns out that they don't make 3D input easier.
It's quite possible to guess wrong.
Everyone is jumping on the AI train and forgetting the fundamentals.
The only vaguely plausible and credible argument I can entertain is the one about AI being too expensive or detrimental to the environment, something which I have not looked sufficiently into to know about. Other than that, we are living so far off in the future, much more than I ever imagined in my lifetime! Wherever I go I see processes which can be augmented and improved though the use of these technologies, the surface of which we've only barely scratched!
Billions are being poured into trying to use LLMs and GenAI to solve problems, and into creating the appropriate tools that wrap "AI", much like we had to do with all the other fantastic technology we've developed throughout the years. The untapped potential of current-gen models (let alone next-gen) is huge. Sure, a lot of this will result in companies with overpriced, over-engineered, doomed-to-fail products, but that does not mean that the technology isn't revolutionary.
From producing music, to (in my mind) being absolutely instrumental in a new generation of education or mental health, or general support for the lonely (elderly and perhaps young?), to the service industry!...the list goes on and on and on. So much of my life is better just with what little we have available now, I can't fathom what it's going to be like in 5 years!
I'm sorry I hijacked your comment, but it boggles the mind how so many people so adamantly refuse to see this, to the point that I often wonder if I've just gone insane?!
Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.
And lastly, you've gone to great lengths to completely air-gap the systems holding your customers' IP. Do you really want some junior dev vibing that data into the Alibaba cloud? How about aging your CFO by 20 years with a quote for an inference cluster?
Unreliability and the difficulty of reasoning about potential failure scenarios are tough. I've been going through the rather painful process of taming LLMs to do the things we want them to, and I feel that. However, for each feature, we have been finding what we consider to be rather robust ways of dealing with this issue. The product that exists today would not be possible without LLMs, and it is adding immense value. It would not be possible because of (i) a subset of the features themselves, which simply would not exist without LLMs, and (ii) time to market. We are now offloading the parts of the LLM which would be possible with code to code — after we've reached the market (which we have).
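To illustrate the offloading, here is a toy sketch of the pattern (all names hypothetical; `call_llm` stands in for whichever inference client you use): try a deterministic code path first, and only fall back to the model for the genuinely fuzzy cases.

```python
from datetime import datetime

def parse_event_date(text: str, call_llm) -> str:
    """Extract an ISO date from user text: code first, LLM as fallback."""
    # Deterministic path: formats plain code can handle.
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%B %d, %Y"):
        try:
            return datetime.strptime(text.strip(), fmt).date().isoformat()
        except ValueError:
            pass
    # Fuzzy path: only now pay for (and risk) a model call.
    return call_llm(f"Return only the ISO 8601 date mentioned in: {text!r}")
```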
> Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.
I don't see how this would necessarily happen? I mean, of course I can see problems with prompt injection, or with AIs being led to do things they shouldn't (I find this to be a huge problem we need to work on). From a coding perspective, I can see the problem with AI producing code that looks right but isn't exactly. I see all of these, but don't see them as roadblocks — no more than I see human error as a roadblock in many of the cases where the systems I'm thinking about will be deployed.
With regards to customers' IP, this seems again more to do with the fact some junior dev is being allowed to do this? Local LLMs exist, and are getting better. And I'm sure we will reach a point where data is at least "theoretically private". Junior devs were sharing around code using pastebin years ago. This is not an LLM problem (though certainly it is exacerbated by the perceived usefulness of LLMs and how tempting it may be to go around company policy and use them).
I'll put this another way: Just the scenario I described, of a system to which I upload a video and ask it to comment on it from multiple angles is unbelievable. Just on the back of that, nothing else, we can create rather amazing products or utilities. How is this not revolutionary?
I'm old enough to remember when "big data" and later "deep data" was going to enable us to find insane multi-variable correlations in data and unlock entire new levels of knowledge and efficiency.
AI as currently marketed is just that with an LLM chatbot.
They're at risk of what? It's easy to hand-wave about disruption, but where's the beef?
I can tell you for sure they did not have a Blockchain strategy, and they turned out just fine.
This is an interesting phenomenon that probably has no historical equivalent and hence may not have been fully contemplated in any literature, and so comparisons like TFA fall short of capturing the full implications.
Whether these companies see themselves as AI companies seems orthogonal to the fact that they should acknowledge this sea-change and adapt. However, currently all industries seem to be thinking they should be an "AI company" and are responding by trying to stuff AI into any product they can. Maybe the urgency for them to adapt should be based on the degree to which knowledge work is critical to their business.
I like this quote. But this analogy doesn't exactly work. With this hype cycle, CEOs are getting out and saying that AI will replace humans, not horses. Unlike previous artisans making carriages, the CEOs saying these things have very clear motivations to make you believe the hype.
Cynically, there's no difference from a CEO's perspective between a human employee and a horse
They are both expenses that the CEO would probably prefer to do without whenever possible. A line item on a balance sheet, nothing more
The median CEO salary is in the millions, they do not have to ever worry about money again if they can just stick around for one CEO gig for a couple of years
Granted, people who become CEOs are not likely to think this way
But the fact is that when people have so much money they could retire immediately with no consequences, they are basically impossible for a business to hold accountable outside of actual illegal activity
And let's be real. Often it's difficult to even hold them accountable for actual illegal activity too
False. CEOs are held accountable all the time. At the extreme end, research shows that 1 in 3 CEOs are fired within 18 months.
Termination is the end result of a process
It is not unreasonable to think that is an accountability process of some sort...
I am talking about being accountable to society
And the size of the parachute they get when they're tossed from the plane? I know there are many small companies with someone in a "CEO" position who might not be hugely compensated. But when we speak of CEOs at major corporate ventures, as is commonly understood in questions of executive responsibility (or the lack thereof), let's be real about actual average severance figures for a clearer picture of the consequences and "punishment".
Incentives for CEOs and Executives are just way different, which is actually a huge part of the problem we face in society
We are run into the ground for profit by people who think the purpose of life is to profit
>Body by Fisher
which had an image of the carriages they had previously made.
And conversely, people who fear that they might be replaced have very clear motivations to claim that AI is useless.
That's what I see with AI. Every company wants to suddenly "be an AI company", although few are sure what that means. Companies that were legitimately very good at a specific thing are now more interested in being mediocre at the same thing as everyone else. Maybe this will work out in the long run, but right now it's a pain in the ass.
There's an entire shelf devoted to "disruption."
This is how VCs destroy businesses by bringing in adult supervision. CEOs are not incentivized to play the long game.
The difference with the mobility and transportation industry, whether by horse and carriage or motor car, was that it was in demand by 99% of the population. AI, on the other hand, is only demanded by say 5%-10% of the population. How many people truly want an AI fridge or dishwasher? They just want fresh food and clean dishes.
There is no hope, after all :(
I wonder if there is something noteworthy about Studebaker - yes, they were the only carriage maker out of 4000 to start making cars, and therefore the CEO "knew better" than the other ones.
But then again, Studebaker was the single largest carriage maker and a military contractor for the Union - in other words, they were big and "wealthy" enough to consider the "painful transformation", as the article puts it.
How many of the 3999 companies that didn't pivot actually had any capacity to do so?
Is it really a lesson in divining the future, or more survivorship bias?
Which company is that, you ask? My last name is Maxwell.
(But afaik, none of my ancestors owned or even worked for that car company.)
I on the other hand, see the exact opposite happening. AI is going to make people even more useful, with significant productivity gains, in actuality creating MORE WORK for humans and machines alike to do.
Leaders who embrace this approach are going to be the winners. Leaders who continue to follow the hype will be the losers, although there will probably be some scam artists who are winners in the short term who are riding the hype cycle just like crypto.
There were some classes of combustion engines that smaller shops did manufacture, such as big hot-bulb engines for ships and factories. Miniaturised combustion engines and electric motors are not suited to craftsman-like building; they require standardised procedures with specialised machines.
The main mechanism is not "disruption" but rather a trend of miniaturisation and mass production.
We tag “complacency” as bad, but I think it’s just a byproduct of our reliance on heuristics and patterns which is evolutionarily useful overall.
On the other hand we worry (sometimes excessively) about how the future might unfold and really much of that is unknown.
Much more practical (and rewarding) to keep improving oneself or one's organisation to meet the needs of the world today, with an eye on how the world is evolving, rather than try to be some oracle or predict too far out (in which case you need to get both the prediction and the execution right!).
As an aside, it seems a recent fashion to love these big bets (AI, and remember the Metaverse?) and to make big high-conviction statements about the future, but that's more to do with the individuals' specific circumstances and motivations.
The history of those is the big untold story here.
It doesn't help if you're betting on the right tech too early.
Clearly superior in theory, but held back by the lack of significant breakthroughs in battery research and the general spottiness of electrification in that era.
Tons of electric vehicle companies existed to promote that comparable tech.
Instead, the handful of combustion engine companies eventually drove everyone else out of the market, not least because gasoline was marketed as more manly.
https://www.theguardian.com/technology/2021/aug/03/lost-hist...
Lots of ideas that failed in the first dotcom boom in the late 1990s are popular and successful today but weren't able to find a market at the time.
At my company, "General Manager" positions were the ones that actually set much of the planning priorities. Many of them, eventually got promoted to VP, and even, in the case of my former boss, the Chairman of the Board.
When the iPhone came out, one of my employees got one (the first version). I asked to borrow it, and took it to our Marketing department. I said "This is gonna be trouble for us."
I was laughed out of the room. They were following the strategy set down from the General Managers, which involved a lot of sneering at the competition.
The iPhone (and the various Android devices that accompanied it), ate my company for breakfast, and picked their teeth with our ribs.
A couple of the GMs actually anticipated the issues, but they were similarly laughed out of their rooms.
I saw the same thing happen to Kodak (the ones that actually invented digital photography), with an earlier disruption. I was at a conference, hosted by Kodak, and talked to a bunch of their digital engineers and Marketing folks.
They all had the same story: They were being deliberately kneecapped by the film people (with the direct support of the C-Suite).
At that time, I knew they were "Dead Man Walking." That was in 1996 or so.
Wish I'd bookmarked them; some great reading in those
> The first cars were expensive, unreliable, and slow
We can say the same about the AI features being added to every SaaS product right now. Productization will take a while, but people will figure out where LLMs add value soon enough.
For the most part, winning startups look like new categories rather than those beating an incumbent. Very different than SaaS winners.
TV networks relative to Netflix are another.
And who can forget BlackBerry?
Let's see a similar story for, say, dirigibles.
_____
The first cars were:
- Loud and unreliable
- Expensive and hard to repair
- Starved for fuel in a world with no gas stations
- Unsuitable for the dirt roads of rural America
_____
Reminds me of Linux in the late 90s. Talking to Solaris, HPUX or NT4 advocates, many were sure Linux was not going to succeed because:
- It didn't support multiple processors
- There was nobody to pay for commercial support
- It didn't support the POSIX standard
Actually, gasoline was readily available in its rôle as fuel for farm and other equipment, and as a bottled cleaning product sold at drug stores and the like.
>- Unsuitable for the dirt roads of rural America
but the process of improving roads for the new-fangled bicycle was well underway.
In the areas where it does make sense, it's been in use for years, if not longer, without anyone screaming from the rooftops about it.
Great line.
Why didn't all the carriage makers (400+) become Ford, General Motors, and Chrysler? Why didn't hundreds of catalogue sales companies become Amazon? Why didn't hundreds of local city taxi services become Uber and Lyft?
Hint: there's hundreds on one side of these questions and a handful on the other.
Beyond the point that a future market doesn't necessarily have space for present players, the "Ooh, look how foolish, they missed the next wave" articles miss the point that present businesses exist to make money in the present and generally do so. If you're a horseshoe maker, you may know your days are numbered, but you have equipment and you're making money. Liquidating to jump into the next wave may not make any sense: make your product 'till demand stops and retire. Don't reinvest, but maybe raise prices and extract all you can from the operation now. Basically, "failed to pivot" applies to startups that don't have a capital investment and income stream with a given technology. If you have those, speculative pivoting is ignoring your fiduciary duty to protect that stuff while it's making money, even if the income stream is declining.
And sure, I couldn't even get to the part about AI, this offended the economist part of me so much...
If this was published a few months ago, it would be telling everyone to jump into web3.
My bank transfers within the country cost me nothing to send or receive, for example.
With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.
I say this as someone who has worked for 7 years implementing AI research for production, from automated hardware testing to accessibility for nonverbal people: I don't think founders need to obsess even more than they do now about implementing AI, especially in the front end.
This AI hype cycle is missing the mark by building ChatGPT-like bots and buttons with sparkles that perform single OpenAI API calls. AI applications are not a new thing, they have always been here, now they are just more accessible.
The best AI applications are beneath the surface, empowering users; Jeff Bezos said as much (in 2016!) [1]. You don't see AI as a chatbot on Amazon; you see it in "demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations."
[1]: https://www.aboutamazon.com/news/company-news/2016-letter-to...
I'm missing something here. First, I thought Steve's point was that the carriage makers did not see "individual transportation" as their business, and they should have--if they had, they might have pivoted like Studebaker did.
So if "most companies are not in the field of Artificial Intelligence", that could mean that they ought to be.
However, I draw a somewhat different conclusion: the business that companies ranging from Newsweek to accountants to universities to companies' HR departments should see themselves in is intelligence, regardless of whether that's artificial or otherwise. The question then becomes which supplies that intelligence better: humans or LLM-type AI (or some combination thereof)? I'm not at all sure that the answer at present is LLM-AI, but it is a different question, and the answer may well be different in the near future.
There are of course other kinds of AI, as you (jampa) mention. In other words, AI is not (for now) one thing; LLMs are just one kind of AI.
The implication of the author here is that those providing services that continue using human resources rather than AI, are potentially acting like carriage manufacturers.
Of course that assumes improvements in technology, which is not guaranteed.
But if all 400+ carriage makers had pivoted, would they have had a chance to survive very long? Would they all have made more money pivoting? The idea that all this is only a "lack of vision" rather than a hard business choice is kind of annoying.
I'm guessing that for someone in the laundry or restaurant business it might be hard to understand how AI could change their lives. And that is true, at least at this stage in the adoption and development of AI. But eventually it will find its way into their business in some form or other.
There are stages to this. Pretty sure the first jobs to go will be the easiest. This is the case with software development too. When people say writing code has gotten easier, they really are talking about projects that were already easy to build getting even easier. The harder parts of software development are still hard. Making changes to large code bases with a huge user base comes with problems where writing code is kind of irrelevant. There are bigger issues to address: regression, testing, stability, quality, user adoption, etc.
The second stage is of course once the easy stuff gets too easy to build, there is little incentive to build it. With modern building techniques we aren't building infinite huts, are we? We pivoted to building skyscrapers. I do believe most of AI's automation gains will be soaked up in the first wave; there will be little incentive to build the easy stuff, and the harder stuff will carry more productivity demands on people than ever before.
If we're going to talk cars, I think what the Japanese did to the big three in the 1980s would have been far more on point.
AI is encumbered by AGI which is further encumbered by the delta between what is claimed possible (around the corner) and what is. That's a whole different ball game with wildly different risk/reward tradeoffs.
Learning about history post buggies didn't do much for me.
If we laboriously create software shops in the classical way, and suddenly a new shop appears that is buggy, noisy, etc but eventually outperforms all other shops, then the progenitors of those new shops are going to succeed while the progenitors of these old shops are not going to make it.
It's a strain. The problem is AI is a new tech that replaces an entire process, not a product. Only when the process is the product (eg the process of moving people) does the analogy even come close to working.
I'd like to see analysis of what happened to the employees, blacksmiths, machinists, etc. Surely there are transferable skills, and many went on to work on automobiles?
This SE q implies there was some transition rather than chaos.
https://history.stackexchange.com/questions/46866/did-any-ca...
Stretching just a bit further, there might be a grain of truth to the "craftsman to assembly line worker" when AI becomes a much more mechanical way to produce, vs employing opinionated experts.
AGI + robot is way beyond a mere change in product conception or implementation. It's beyond craftsmen v. modern forms of manufacturing we sometimes read about with guns.
It is a strain indeed to get from cars vs. buggies to AGI. I dare say that without AGI as part and parcel of AI, the internalization of AI must necessarily be quite different.
Agreed. The analogy breaks down because the car disrupted a single vertical but AI is a horizontal, general-purpose technology.
I think this also explains why we're seeing "forced" adoption everywhere (e.g., the ubiquitous chatbot) -- as a result of:
1. Massive dose of FOMO from leadership terrified of falling behind
2. A fundamental lack of core competency. Many of these companies (I'm talking about more than just tech) can't quickly and meaningfully integrate AI, so they just bolt on a product
For a bunch of stuff - banks, online shopping, booking a taxi, etc - this shift already happened with non-LLM-based "send me notifications of unusual account activity" or even the dead-simple "send me an email about every transaction on my bank account." Phone notifications moved it from email to built-into-the-OS even.
The "LLM hype cycle" tweak becomes something like "have an LLM summarize the email instead of just listing the three transactions" which is of dubious use to the average user.
Banks: Normal retail customers are responsible for managing their account balances, importing transaction data into whatever bookkeeping system, downloading their tax forms for filing, adjusting their services and strategy based on whatever they're planning to do in their life, etc. Private banking is a reasonable model for the service that everyone should get, but can't, because it's too expensive.
Online shopping: Most people have to figure out what they're looking for, research the options, figure out where to order from, keep track of warranties, repairs, returns, recalls, maintenance, consumables, etc. Personal assistants can absorb most of that, but that's expensive.
Booking a taxi: On the same theme, for all the scheduled travel that should be booked and ready to go based on your calendar. Personal assistants can do this too, but again it's expensive.
The core ideas of giving the service provider context, guidance, and autonomy to work without regular intervention are not unique to automation but only recently is there a conceivable path to building software that can actually deliver.
Not this time, tho. ChatGPT is the iPhone moment for "AI" for the masses. And it was surprising and unexpected both for the experts/practitioners and said masses. Working with LLMs pre-GPT-3.5 was a mess: hackish, "in the background", and a way, way worse experience overall. ChatGPT made it happen, just like the proverbial "you had me at scroll and pinch-to-zoom" moment in the iPhone presentation.
The fact that we went from that 3.5 to whatever claude code thing you can use today is mental as well. And one of the main reasons we got here so fast is also "chatgpt-like bots and buttons with sparkles". The open-source community is ~6mo behind big lab SotA, and that's simply insane. I would not have predicted that 2 years ago, and I was deploying open-source LLMs (GPT-J was the first one I used live in a project) before chatgpt launched. It is insane!
You'll probably laugh at this, but a lot of fine-tuning experimentation and gains in the open source world (hell, maybe even at the big labs, but we'll never know) is from the "horny people" using local llms for erotica and stuff. I wouldn't dismiss anything that happens in this space. Having discovered the Internet in the 90s, and been there for every hype cycle in this space, this one is different, no matter how much anti-hype tokens get spent on this subject.
ChatGPT wasn’t the iPhone moment, because the iPhone wasn’t quickly forgotten.
Outside of software, most adult professionals in my network had a play with chatgpt and have long since abandoned their accounts. They can’t use chatbots for work (maybe data is sensitive, or their ‘knowledge work’ isn’t the kind that produces text output). Our native language is too poorly supported for life admin (no Gemini summaries or ‘help writing an email’). They just don’t have any obvious use case for LLMs in their life.
I guess there's a quiet majority thing going on where the vast majority of businesses are just not integrating chatbots because their business is not generating text.
I know an architect and, after much encouraging from me, she tried it. She said ChatGPT would most of the time turn a bedroom window into a restroom. It's kind of hilarious, because guessing the next word and spatial thinking seem to be very different beasts altogether. And in some way they might be two different tracks of intelligence. Like two different types of AGI.
A picture is worth a thousand words, as the saying goes.
My guess is a picture is better than infinite words. How do you explain something as it exists? You can use as many words, phrases, metaphors, and similes as you like, but is it really possible to describe something in words and not have two different people, or even a computer program, imagine it very differently?
Another way of looking at this is that language itself might be several layers below intelligence. Notice that you can get close but never accurately describe what you are thinking. If that is the case, we are truly cooked and might never have AGI at all, as there is only so far you can go representing something you don't understand by guessing.
There are a lot of great use cases for ML outside of chatbots
So at best technically correct on his part but still semantically incorrect
To be slightly provocative, most of the ML applications that are profitable are not chatbots.
To stay on Amazon, their product recommendations, ads ranking, and search likely make Amazon way more than their little AI summaries or Rufus chatbot.
So maybe your analysis is outdated?