my truck as rendered by dream.ai in “retro-futurism” style
We’ve had several posts here of car images rendered by AI software, either changing their style or placing them in a different time. That’s been a lot of fun, and I’m going to play with it more too, when I get more time. But the really big news in AI is ChatGPT, an AI platform from OpenAI that has the potential to impact the written word profoundly, as it answers questions in a much more human form than Google and sounds very authoritative. It can write essays, poems, sonnets, and much more.
So I decided to test its automotive knowledge, in the fear (or hope?) that it could replace me. Here goes:
(Note: my questions are in regular type, the answers are in italics, and my comments are in bold)
(Let’s start with an easy one) What year did Kaiser-Frazer start building cars?
Kaiser-Frazer started building cars in 1947.
Correct.
Why did General Motors go bankrupt?
General Motors declared bankruptcy in 2009 due to a combination of factors including a global economic recession, increasing competition from foreign automakers, high labor costs, and mismanagement of funds.
Not bad, for such a short answer.
What was the last year for the Chevrolet 409 V8 engine?
The last year for the Chevrolet 409 V8 engine was 1965.
Correct.
Was the Fiat 1400 sold in the US?
Yes, the Fiat 1400 was sold in the US from 1958 to 1960.
False. The 1400’s last year was 1957.
How much did the Fiat 1400 cost in the US in 1958?
The Fiat 1400 cost $1,935 in the US in 1958.
Obviously false, but the specific price is curious. Mixed up with another model? My Catalog of Imported Cars does not show any 1958 Fiat with that price.
What year did Ford first offer anti-lock braking?
Ford first offered anti-lock braking in 1978.
False. It was in late 1969.
How did Ford’s Sure Track work?
Ford’s Sure Track system used a combination of sensors, valves, and solenoids to continuously monitor wheel speed and adjust brake pressure when it detected wheel slippage. This allowed for controlled braking and improved vehicle stability.
Good enough.
What was the size of the engine in the Tatra 77?
The Tatra 77 had a 3.0 liter, V8 engine.
Correct, for the initial version. The 77A had a 3.4 L V8.
How fast could it go?
The Tatra 77 could reach a top speed of 118 mph (190 km/h).
False. More like 90 mph, and 93 mph for the 77A with the larger engine.
What Tatra patents did VW infringe upon?
Volkswagen infringed upon Tatra’s patents for the air-cooled, rear-engine design and the independent suspension system used for the Tatra 77.
That was only the claim, but not what was decided by the court.
What part of the independent suspension system?
The part of the independent suspension system that Volkswagen infringed upon was the swing axle rear suspension design.
That’s not true, actually. (my response to ChatGPT)
Volkswagen didn’t infringe upon the swing axle design, but rather the independent rear suspension design.
It’s hedging. Or BS’ing.
What aspect of that design in particular?
The aspect of the independent rear suspension design that Volkswagen infringed upon was the arrangement of the rear axle and differential.
False. Those were included in the claims, but the only infringement VW was actually held liable for concerned certain details of the air cooling ducts, a point that never appeared in ChatGPT’s answers. But then this is an issue that has been widely written up incorrectly.
When was that issue settled?
The issue was settled in 1961 when Volkswagen agreed to pay Tatra a one-time fee of 3 million Deutsche Marks in compensation.
Correct.
Why did Chrysler downsize the 1962 Dodge and Plymouth?
Chrysler downsized the 1962 Dodge and Plymouth models in order to reduce the cost of production and to make the vehicles more competitive in the market. The cars were also made lighter and more fuel-efficient.
Wow! This is a huge surprise and better than 99.9% of automotive writers/journalists/bloggers would answer. ChatGPT nailed it, instead of the ubiquitous “Chrysler VP William C. Newberg unilaterally demanded this downsizing on a Monday after overhearing a comment from Ed Cole at a garden party/golf club about a downsized 1962 Chevy.” This answer reflects the essence of my post on these cars, which debunks that lame old story.
Were they successful?
Yes, the downsized 1962 Dodge and Plymouth models were successful and allowed Chrysler to remain competitive in the market.
Oops. Not exactly. But they weren’t quite as unsuccessful as often depicted, and after a bit of a restyle in 1963, they did become successful. In other words, it was their styling, not their trimmer size, that hurt their sales in 1962. And the B-Body did become a key element of Chrysler’s future success.
Why did the Corvair fail?
The Corvair failed due to its poor handling, lack of power, and a series of negative reports from consumer advocate Ralph Nader. The reports highlighted the Corvair’s design flaws and highlighted its potential for rollover accidents.
ChatGPT wasn’t reading my posts on the subject, obviously. The gen1 Corvair’s handling was known to be something of an issue prior to Nader’s 1965 book, but not one that materially impacted its sales through 1965, which were actually quite good. By the time Nader’s book came out, the Corvair’s future was already sealed by the overwhelming success of the Mustang.
Did the Mustang impact the Corvair’s sales?
Yes, the Mustang had a negative effect on the Corvair’s sales. The Mustang was a popular, affordable sports car that was released around the same time as the Corvair and it quickly gained market share.
It was not “released around the same time as the Corvair”, but otherwise it’s a pass.
How did the Corvair influence the Mustang?
The Corvair’s independent suspension system had a significant influence on the design of the Mustang. Ford engineers borrowed the Corvair’s independent suspension design and adapted it for the Mustang.
In my industry (radio), the hubbub is that this will eventually replace blog posts written by station personalities….
It isn’t there yet, but if it is cheap enough, some owner or general manager will declare it good enough and fire the on-air staff.
Thanks for the entertaining test drive of AI!
Don’t praise the machine!
Couldn’t resist 🙂
Yes, well, eventually – if not already – robots will be able to dance. At which point, those people who only want to watch dancers may be satisfied; but if one enjoys dancing, I’m not so sure that watching a machine do it quite fulfills.
That’s my story, and I’m sticking to it.
As an architect who writes the technical specifications for construction projects, I can see how AI will create those specifications from the building information model (BIM). BIM is, after all, a database, and the process has already begun. I am confident, though, that AI-produced specifications, and especially the model itself, will not be “perfected” during the expected duration of my career. In other words, I expect to retire before I am truly rendered unnecessary.
Interesting.
I’d ask it how to drive a 1953 Chrysler PrestoMatic M6 semi-automatic transmission. (That’s probably an unfair question – most humans who know that answer are either long gone, or in the process of going.)
One of my favorite subjects is AI, to drive your car, fly your plane, run your home, solve your medical problems, or even to detect falls via a smart-watch.
This reminds me of something I learned over many years of developing, testing, and implementing computer systems (not necessarily AI software, but software nevertheless).
There are two types of computer software bugs.
1. Those you find.
2. Those you do not find.
If the software with hidden bugs is writing a CC column and a bug pops up, CC commenters (or the site owner) will very quickly set things straight, and as a result everyone is left better informed (including the software).
If the software with hidden bugs is guiding a delicate surgical procedure, or flying an airliner, or driving your EV at highway speeds, well, the solution can be a bit messier and possibly fatal to innocent victims.
The following is very old, but still funny (to me):
“Ladies and Gentlemen, we are introducing you today to the latest and newest aviation advancement in history. This plane is flying without a pilot or co-pilot. It is controlled by way of radio from the ground. Sit back and relax and enjoy your flight. Be assured that absolutely nothing can go wrong, absolutely nothing can go wrong, absolutely nothing can go wrong, absolutely nothing can go wrong. ~~~~~~~~~~” .
Ah, then you haven’t met my Mother, or Uncle, who learned to drive on my Grandfather’s 1951 Windsor…it was semi-automatic, though I’m not sure if it was the PrestoMatic M6.
My Uncle drove it almost all the way through his undergraduate studies. It blew a head gasket in his senior year, and he didn’t have time to deal with it; he needed to finish his finals, he had a job lined up, and he qualified for a loan on a new ’69 Ford LTD…4-door hardtop, 302, drum brakes.
My Mother has never really been comfortable driving a manual…in fact, when she and that same Uncle took a trip to Eastern Europe in ’98, just the two of them, she wanted to be able to back him up…weird things seem to happen to my Uncle sometimes when he travels…so I took her to an abandoned Walmart parking lot and gave her a refresher. She never did have to drive on that trip. She finally gave up driving in 2021, so my sister and I take her where she needs to go (mostly to doctor appointments).
I think she may have taught my Grandfather as well. He owned a mom-and-pop grocery store with walk-up clientele; there was almost no parking nearby, and in fact it was tough to stock his store: the trucks had to park halfway onto the sidewalk to unload, and others didn’t even try. So my Grandfather used the Chrysler to pick up some of the items to sell in his store…that kind of surprised me, as I thought he’d get a wagon of some sort, but I guess the Chrysler was roomy enough. He had never driven a car before buying his; neither of my Grandmothers ever learned how to drive (they did lots of walking and used public transportation as far as they could before their spouses eventually bought a car). My Grandfather was in his 50s when he bought his first (and, as it turned out, only) car, the ’51 Windsor.
Having read many classic car books and articles over the past 40 years, I am always amazed at the number of factual errors, falsehoods, and opinions stated as fact. It’s disconcerting to realize that this is certainly true about books in other fields as well. So if AI’s answers are wrong, it’s probably a case of garbage in, garbage out.
AI should not replace CC writers, because CC is based on genuine, authentic personal stories and human reactions to cars. No one wants to read “canned”, artificial writing.
However, my father has recently suggested that we replace the government with a Watson-type AI decision-making computer. Yes: get rid of the president, Congress, the courts, and elections. Watson will make all governmental decisions that will maximize outcomes for all! At first, I was skeptical, but I’m slowly warming up to the idea . . .
Agreed about bad info out there. But I’ve never come across anyone saying that the Mustang’s rear suspension was a copy of the Corvair’s swing axles. That’s just goofy.
I don’t know what’s more impressive, that “AI” can write a poem about the Chevy V8, or that a human would think of asking that question. As a technologist, I’m impressed with the capabilities of this software; as a human, I’m glad I’m old enough that I’ll probably never use it.
I still think this is one of the more interesting aspects of GPT-3:
When was curbsideclassic.com founded, by whom, and what is in it?
CurbsideClassic.com was founded in 2009 by Paul Niedermeyer. The website features articles and photographs about classic and vintage cars.
Seems like GPT-3 had this info stored in its database, which is fun to think about.
That’s incorrect.
CC.com was founded in 2011. I started writing CC articles at TTAC in 2009.
It choked on:
What were the problems with the Chrysler Ultradrive transmission?
I’m sorry, but I am not able to browse the internet to research information on specific problems with the Chrysler Ultradrive transmission. As a large language model trained by OpenAI, my knowledge is limited to what I have been trained on, and I do not have the ability to browse the web to find new information. I can only provide information based on what I have been trained on, which has a knowledge cutoff of 2021. If you have any other questions, I’d be happy to try to help with those.
That’s surprising. Where else would it get all of its info?
I suspect it was fed a subset of automotive sites, since it does need a limited dataset to start. Crawling the entire internet and identifying which data is relevant to cars is probably asking a bit much at this stage.
Did not know about the “independent suspension” on the Mustangs.
Mustangs have had “independent rear suspension” since 2015, but not since 1964!
Ok, there may’ve been a special edition in the SN95 era that had IRS. The Cobra maybe? Anyway…
WAY too much time has elapsed for this to be influenced by the Corvair.
Oh, and there is no 1964-and-a-half Mustang! – But this is a pet peeve of mine, and not everyone else agrees with me.
The whole AI response related to the early Mustang was poor, IMO. Rick, you are correct that there is no 1964 1/2 Mustang, but there were the early-run cars, which have distinct differences and were produced between March 9th and July 31st, 1964. They were not “special models” as the AI answer incorrectly states. There were some running production changes made when the second-series 1965 models were introduced in August 1964, but they were minor, like the alternator replacing the generator, the 260-2V replaced by the 289-2V, the 170 six replaced by the 200, and the hood edge revised to lose the “stiffening skirt.” Most casual observers didn’t notice the differences.
That said, all of these early-production or first-run Mustangs were 1965 model year cars, not 1964 as the AI incorrectly answers. They were all sold as 1965 models, and all the VINs start with a “5,” which denotes the 1965 model year. Enthusiasts labeled these first-run 1965 Mustangs as 1964 1/2 simply to distinguish them from the later 1965 Mustangs, due to the relatively high number of mid-run production changes. FWIW, Ford was infamous for making mid-year production changes on its cars. Anyone who owns old Fords knows that you often have to know your production date to order the correct part.
Ugh, yes…timing chains for Ford 289s! My engine fell very near the changeover date, so I had to be exact. As a kid, I was not, and installed the wrong one. A cacophony ensued from the timing cover.
Thanks Vince… and yes, Ford CONTINUES to do this.
I can’t even count the number of times that I need the specific VIN on my 2007 Mustang when I go to the parts counter at Packer/Norris Ford Parts. The reason cited by them is just what you say, “mid year production run changes”.
Artificial intelligence should become more and more important over time, as human intelligence seems to be disappearing.
The resident contrarian hereabouts (me) says that this is just one more thing in a long list of inventions that humanity was convinced would solve our problems. What will happen is that, while solving many of those problems, it will cause many more new problems that people have not thought of yet. And some of those new ones will be more serious than the ones that were solved.
Probably in the end AI will be like CNC machine tools. There will be an “operator” who monitors what it does, but the AI will do real work. Actual writers and stuff will be the same as machinists with Bridgeports are in modern machine shops – necessary but few in number and only used for custom one-off jobs.
I think that your analogy holds up well for commercial writing; but unlike machine shop work, writing is something that everyone should be able to do (and, I would add, do well), whether or not it’s pursued as a vocation or an art. It helps you think, it helps you communicate. Two very basic skills that all humans should hone and have.
Sadly, though, I frequently encounter young people (students) who, being well aware of the various ways to get writing done…and AI is fully on their radar…ask in all seriousness, “Why do I need to know how to write when there’s a machine that can do it?” Likewise, they also ask, “Why do I need to know [fill in the blank] when I can just Google it?” Right now, some teachers have responses to those questions, but I can see that it’s a very short road to there soon being no answer.
I’m afraid that this will happen sooner than we think.
Looks like your Ford pickup got converted into a mid-60’s Dodge, more or less, by the AI.
Replace the fine writing here at CC? I seriously doubt it. Do you think that this AI software could EVER be able to write an essay in the style of Joseph Dennis? Never. No way.
While this exercise did show some impressive artificial talent, as Paul pointed out, it got many of the facts quite wrong. The Mustang info was particularly cringeworthy.
That said, the SBC poem was probably spot on!
I also still have my doubts about AI driving us around. Someday, maybe, but it’s just not ready for primetime yet. My nephew trusts it a bit too much in his Model Y, which makes me nervous. And an in-law demonstrated it with me at the helm of his Model 3; while it was impressive at neighborhood speeds (25 MPH) and even found (and parallel parked in) a parking space, it left me with doubts when it almost hit an F-150 that was parked cattywampus. The Model 3 DID swerve, albeit at the last second, to avoid hitting the F-150, but I was all ready to grab the yoke. The owner of the car said, “Give it a chance! It’ll see the truck.” Yeah, I’ll drive my own car, thanks. Sorry for going off topic, albeit only tangentially.
Don’t need AI, just a script to write “GM sux”
The best part, IMO, of the AI rendering of Paul’s truck is that apparently in the future, all of our vehicle’s brand names will be in Klingon.
https://en.wikipedia.org/wiki/Klingon_scripts
Qapla’!
Very thin ice separates the headline of this post from my recent absence as a CC writer, but I’m stepping on (in?) it anyhow:
There is no such thing as “artificial intelligence” in the sense being hyped and marketed to the public. It is a zingy, catchy phrase which does a lousy job of describing the thing as it actually exists, but (abetted by popular familiarity with the likes of the conversational computer on Star Trek) it does a terrific job of attracting dollars from funders, investors, and consumers, all of whom grossly overestimate its real capabilities. Probably the best comment, from someone I trust on the matter, is that for the foreseeable future AI is pretty much a 3-year-old with an exceptionally large vocabulary; it can sometimes make a good show of imitating intelligence.
More or less the same goes for “machine learning”.
Clearly AI doesn’t have a chance against Daniel, I’d say, now or in the future…
d’Awww. Thanks!
I think comparing it to a 3-year-old is even overgenerous. “Machine learning” is pretty much entirely about bias-based pattern recognition. Living creatures also have various other types of intelligence, like the abstraction and spatial reasoning involved in creating and using tools, or the development of social relationships. Current machine learning systems not only can’t do those things, there’s no way they ever will, because that’s not what their algorithms do.
When it comes to language, for instance, living creatures that can talk may go through a similar-looking process of imitation and mimicry of language that gains coherency and meaning as the creature learns what the words and gestures mean and applies them creatively. (It’s interesting to see this in nonhuman animals like parrots, who may have separate vocabularies of words they understand and use linguistically and words they simply repeat as a form of social noise-making that doesn’t have a specific meaning.) AI output doesn’t ever actually gain meaning or coherency, only the ability to more plausibly imitate it by reference to its body of training material, which is not the same thing.
(Would it be theoretically possible to create an AI that develops animal-like intelligence? Almost certainly. Are current “AI” systems capable of doing that? Not a bit.)
It’s a matter of semantics, in part. Call it something else.
Having read a couple of articles on it since I did this little experiment, I realize I used it in the wrong way for the most part. It is not a fact machine; that’s better left to Google. It’s a tool, and its abilities lie largely elsewhere.
Here’s an article that gives some insight into how to use it in ways that take advantage of its strengths.
https://oneusefulthing.substack.com/p/four-paths-to-the-revelation
There is no doubt in my mind that it will be used in various ways, especially in certain professions and applications as a tool to reduce workload and offer a way to bounce ideas around.
I’ve met personalities that are very convincing about their take on certain subjects yet have major blind spots in the facts that purportedly underpin their arguments. ChatGPT reminds me of them somewhat. 🙂
…and here’s an article about ChatGPT that gives a shudder.
Having read that article, I spent some time with ChatGPT last night, asking it to improve various chunks of well-written text. It did a bloody hatchet job on all of them, stripping out crucial elements and dumbing down the text to about a 3rd- to 5th-grade level. When I asked it what made its revisions better than the source text, it said its text was clearer and more accurate because it had removed unnecessary details and restructured the text in accordance with the principles of good writing.
Um…not.
If they’d call it something else, something more accurate and less hypey, I’d quit griping about it (or at least I’d gripe less).
But that’s not gonna happen. Too much money to be made conning people into believing in ‘artificial intelligence’ and ‘machine learning’.
Also, there are people who apparently really believe AI systems are on the verge of “waking up” like SkyNet in the Terminator movies.
Artificial Intelligence is only as good as the data it feeds upon and its parsing abilities. AlphaGo, IBM Deep Blue, IBM Watson, and Tesla Autopilot are but a few examples of the current state of the art. In terms of information recall, computers are great at embarrassing even the most intelligent. The seasoning of each person’s life experience, and how we choose to express it, might be our saving grace. As long as we remember that synthetics like AI are there to supplement and not to reign supreme, we’ll be fine. CC need not worry for the foreseeable future.
The CC effect also exists online. Yesterday The Economist published this short piece on ChatGPT.
https://www.economist.com/business/2022/12/08/how-good-is-chatgpt
But Paul’s was better.