Will the rise of AI replace us all?


Chris Leigh-Jones

I don't sleep much at night, so in those idle hours I tend to play on the internet. I have an interest in AI and how it may affect our world, so I thought I'd see if AI in the form of ChatGPT could fool the US Coast Guard by passing the 50 or 100T Masters test. I downloaded a bunch of trial tests from a USCG site and plugged them in.......

Questions fell into three groups on a wide variety of subjects.

1. Prescriptive questions, where ChatGPT could access the rules and regurgitate the correct one. Like Google, but without 20 pages of results. It did pretty well at these. "What is the correct color for a life ring?" would be an example. 10/10

2. Interpretive ones where it needs a bit of care. It did less well at these and quite often missed the mark. "What is the sequence of sound signals approaching two closed bridges in a narrow canal?" is an example. It got it partly correct, so a 5/10 result.

3. Outliers involving interpretation or less prescriptive answers. Asked "What is the annual inspection required for a portable CO2 extinguisher?", it basically just made up believable rubbish. So a 0/10 for that. COLREGS tripped it up quite often also.

I then left a chart out on the dining room table with questions and a soft pencil and some dividers. Nothing happened but I waited till 5 am.

So gents, though it did well in some respects and always provided a thorough answer, I think the demise of the mariner lies beyond our lifetimes. Finally defeated by a pencil and pointy dividers! :thumb:
 
I believe that people are confused by the term A.I. Most A.I. boils down to a computer's ability to answer questions through almost instant searching of thousands of databases. It has been going on for a long time and is getting quicker, more reliable and more far reaching. A simple example is Siri, or Google. Ask a question and generally an answer can be found in one of the databases. I have no problem with this.

What I consider to be an invasive A.I. would be a robot (bot) which does not actually search databases but formulates answers on its own. This scares me.

pete
 
Resistance is futile, you will be assimilated. :eek:

An autopilot is AI
Chart plotter too

There are self driving cars. Some put on the brakes if the car ahead of the one you are following slows down.

OP, the challenge you gave AI is much like quizzing a room full of university students. After all, the new gen is the one programming AI computers.
Interesting topic though.

I usually ponder upon the endless universe and question how there could be an end to it since we know there is always something on the other side. ;)
 
As I understand it, the basis of most or all AI is neural networks. In my former life as a global technology manager we tried to use neural networks. We were well versed in statistical modeling, but being able to use all of the data in a certain field was a kind of holy grail. Unfortunately, what you quickly learn is that all these systems are only as good as the data going in. The problem is that, in reality, most data tends to be very clustered. As an example, if treating a disease is done by one treatment in 99% of the cases, then that's all the data will show. The system is unlikely to find other treatments, since they don't exist in the data or are so poorly represented that they have little influence. The other problem is separation of cause and effect; this is the famous stork and birth rate correlation. Yes, there is a correlation, but they're unrelated events that just happen to correlate; there is no cause or effect. In theory AI can 'learn', but that means someone or something has to tell it that it is wrong.
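The stork-and-birth-rate point is easy to demonstrate in a few lines: two series that merely share a time trend will correlate strongly even though neither causes the other. A minimal Python sketch, using made-up illustrative numbers (not real stork or birth data):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2000)

# Two unrelated quantities that both happen to trend upward over time.
storks = 100 + 2.0 * (years - 1960) + rng.normal(0, 3, years.size)
births = 5000 + 80.0 * (years - 1960) + rng.normal(0, 100, years.size)

# The shared trend produces a strong correlation despite no causal link.
r = np.corrcoef(storks, births)[0, 1]
print(f"correlation: {r:.2f}")

# Detrend both series (year-over-year changes) and the apparent
# relationship largely disappears.
r_detrended = np.corrcoef(np.diff(storks), np.diff(births))[0, 1]
print(f"after detrending: {r_detrended:.2f}")
```

The point of the detrending step is the one made above: the raw correlation reflects time, a confounding factor, not any relationship between the two series themselves.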

The idea remains entrancing. In my world we hoped to be able to predict which chemical structures would impart certain performance attributes. But we quickly learned that since the realm of known structure-performance relationships was relatively constrained, or clustered around the established chemistries, the ability of the system to predict significantly beyond this known space quickly deteriorated.
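That deterioration outside the training cluster isn't specific to neural networks; any fitted model shows it. As a toy stand-in for the structure-performance models described above, here a cubic polynomial fits sin(x) well inside the range it was trained on and fails badly just beyond it:

```python
import numpy as np

# "Training data" clustered in [0, 3]; the model never sees anything beyond.
x_train = np.linspace(0.0, 3.0, 100)
y_train = np.sin(x_train)

# Least-squares cubic fit to the clustered data.
coeffs = np.polyfit(x_train, y_train, deg=3)
model = np.poly1d(coeffs)

# Inside the training range the fit is excellent...
err_in = np.max(np.abs(model(x_train) - y_train))
# ...but extrapolating to x = 6, twice the training range, it falls apart.
err_out = abs(model(6.0) - np.sin(6.0))

print(f"max error inside training range: {err_in:.3f}")
print(f"error at x = 6 (outside range):  {err_out:.1f}")
```

The fitted curve is only constrained where the data lives; outside that cluster the model is free to do anything, which is exactly the problem with predicting novel chemistries from a database of established ones.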

The old saying 'garbage in, garbage out' is absolutely true. Unfortunately, despite the fact that the world is awash in data, much of it is really not very useful because it is corrupted, confounded with other factors, or covers such a small space that detecting any meaningful new information is impossible. Even a smart computer doesn't know what it doesn't know.
 
For those wondering what all the fuss is about, 60 Minutes did a segment last week on AI that is a worthwhile watch. The YouTube video is HERE (first segment is AI).

Just prior to the pandemic, I worked as commercial lead for an R&D group within the Lubricants division of one of the world's largest Oil & Gas majors. The projects were upstream refining work related to blending additives. What was a tremendously tedious and time-consuming process was being streamlined from days/weeks to minutes/seconds. At the time, machine learning was constrained by computing power - patents were being argued based on theoretical capabilities, not demonstrated ones. Although the machines did indeed 'learn,' it was more pattern recognition --- nothing cognitive as demonstrated in the 60 Minutes segment.

Honestly, I cannot fathom the implications of cognitive AI. I am reminded that around 1900, the then-head of the US Patent Office speculated that everything of consequence had already been invented. That was before the airplane, phone, TV, and of course the pesky little inventions of our lifetime - the Internet and the smartphone. I have wondered what I don't know - the big stuff. There's an old saying that "You cannot tell a tadpole what it's like to be a frog." Looks like I'm about to find out.

Peter
 
In my limited experience with ChatGPT it seems to shoot from the hip when you don’t give it sufficient detail in a question. It will do that without telling you it is doing that. So at this stage, you better have a pretty good idea of what would be a reasonable and plausible answer or you might be sent down a rabbit hole.

Tom
 
In my limited experience with ChatGPT it seems to shoot from the hip when you don’t give it sufficient detail in a question. It will do that without telling you it is doing that. So at this stage, you better have a pretty good idea of what would be a reasonable and plausible answer or you might be sent down a rabbit hole.

Tom

That's what AI is to me.

More than a simple reactive program like an autopilot or car braking...though as those become more sophisticated, they resemble AI.

When a computer program can "shoot from the hip", it sounds normal....like normal people. Especially the part about not telling you. :socool:

I wonder how many TF posters are really AI..... :D
 
To me, it's an exercise in adding more mediocrity to our world. The CG exam experiment is an excellent example, and it's just like calling tech support at most companies, where you are still talking to real people but they are frustratingly unhelpful.



Ask a simple question where the answer is already in the manual sitting in front of the tech, and still packed in the box for the person calling. The tech will get it right 9/10 times. Same with AI. Gutenberg solved this problem 500 years ago. Just unpack it from the box.

Ask a more complicated question for something not covered explicitly in the manual, or that requires a specific understanding of your situation. The tech might get it right 5/10 times. Same with AI.

Ask a complicated question that involves interactions with other products, or an unusual operation. The tech just starts making **** up and telling you it's a power surge, or a "glitch," or to cycle power and try again. Same with AI. Both simply waste your time. AI might waste less, since you are unlikely to first wait on hold for 30 minutes.



So let's take the worst of our technical society, and make more of it. Great idea.


I took an AI course in grad school nearly 50 years ago. Other than computers being faster and having access to way more data, nothing seems to have changed. Fast access to huge amounts of data is a fantastic tool, and one well suited to computers. But it still takes a person with brains to sort through it, pick out the junk, and find what really matters. We call it diagnostics, or interpretation, or reasoning, etc. That still takes a person, and it's a dying skill - actually being poisoned.
 
ChatGPT thinks my prop shafts are 1-1/2" - they're 1-3/4" - and it still can't tell me the size for the new packing gland I have to order. I'm not too worried, yet.
 
I think we will see extensive regulation restricting AI development very soon. I'm basing this prediction on the reported success of AI in passing the bar exams and the self-interest of the lawyers who are so prevalent in legislative bodies.

Personally, I'm not afraid of anything that cannot power or fuel itself. I'm pretty good at unplugging things.
 
Greetings,
Mr. G. Hahaha....When has ANY legislation stopped or restricted anything? Sorry but VERY cynical.
 
Greetings,
Mr. G. Hahaha....When has ANY legislation stopped or restricted anything? Sorry but VERY cynical.

Good point, I should have said "attempting to restrict"
 
In my limited experience with ChatGPT it seems to shoot from the hip when you don’t give it sufficient detail in a question. It will do that without telling you it is doing that. So at this stage, you better have a pretty good idea of what would be a reasonable and plausible answer or you might be sent down a rabbit hole.

Tom
Sounds like some people I know. The key is knowing that interaction with that person/entity is likely to produce B.S.
 
AI will never replace me, even if it wanted to.
 
Paywall, bummer. I've reached my 'limit of free access' for the NYT, probably from articles on TF!
Do a search for Geoffrey Hinton. There is lots of recent free coverage.
 
I believe that people are confused by the term A.I. Most A.I. boils down to a computer's ability to answer questions through almost instant searching of thousands of databases. It has been going on for a long time and is getting quicker, more reliable and more far reaching. A simple example is Siri, or Google. Ask a question and generally an answer can be found in one of the databases. I have no problem with this.

What I consider to be an invasive A.I. would be a robot (bot) which does not actually search databases but formulates answers on its own. This scares me.

pete

I started college life at Columbia's school of engineering wanting to be a triple E. Then I switched direction, going for an MD/PhD in neuroscience. Finally I switched again, to neurology. Through this evolution I have watched the explosion of computer science development in virtually all aspects of medical science. In recent years there's been a fundamental switch from search engines, as described in the quoted post, to artificial neural networks. A.I. systems, unlike search engines, are neural networks. Unlike search engines, they can apply logic and complex algorithms. Good AI can think, write new code, and create art and creative writing.

AI is revolutionary. It is not a highly developed search engine.
 
It will feed back what it's programmed to feed back, if only from the data it's fed.

Search engines are profitable because anyone selling anything pays cash money to have their site rise in the results given.

It's hard to picture a world where the twin problems of programmer bias and profit motive don't enter into this.
 
Deep learning and machine learning aren't based on computers searching large data sets. There's so-called light A.I., which employs somewhat more elegant algorithms than the typical search engine, but there's also the real deal, which even now approaches a sentient, creative thinking entity able to create new thoughts, code, and designs. These activities aren't within the capabilities of even the most sophisticated search engine.
Turing wrote about machines that could mimic human behaviors, but the field has gone past that, with efforts now focused on actual thinking machines. Please don't confuse the thinking of the 1950s with current developments.
 
"AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it" here
 
"AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it" here

Excellent link with merit. Currently released A.I. is A.I. lite. As said above, current exploration is trying to go past Turing, aimed at sentient machines. Totally agree we're not close to it yet.
I currently have a very bright daughter-in-law who is full of factoids. But factoids aren't knowledge, nor insight, nor the logical application of that data set to decide the best future action. That's the jump which is revolutionary. Sentience is a future goal.
 
I am not educated in this field, but when I see something like "It's hard to picture a world in which..." I can only think back to Star Trek, when hand-held communication devices were a thing of science fantasy.
 
I am not educated in this field, but when I see something like "It's hard to picture a world in which..." I can only think back to Star Trek, when hand-held communication devices were a thing of science fantasy.

Show me the warp drive and transporter.

It's a simple point. Historically, mediocre technology that is easy to make profitable tends toward swift adoption, while cool technology with no readily identifiable profit model tends to wither.

Facebook went nuts over the Metaverse, even going so far as to rename the company after it. They threw gobs of money at it. With no way to find a profit in it, they are backing way off.

Maybe someone has a profit plan for AI. If so, it will take off. I have not seen one yet. If you do, please share so we can all invest and buy our dream yachts from the profits. Sometimes it's actually there, so I'm not saying it isn't. Just saying I've not seen it yet, but would love to.
 
Ah yes, a AI robot to stand a watch at the helm while I go to sleep. Oh perish the thought. LOL
 
Show me the warp drive and transporter.

It's a simple point. Historically, mediocre technology that is easy to make profitable tends toward swift adoption, while cool technology with no readily identifiable profit model tends to wither.

Facebook went nuts over the Metaverse, even going so far as to rename the company after it. They threw gobs of money at it. With no way to find a profit in it, they are backing way off.

Maybe someone has a profit plan for AI. If so, it will take off. I have not seen one yet. If you do, please share so we can all invest and buy our dream yachts from the profits. Sometimes it's actually there, so I'm not saying it isn't. Just saying I've not seen it yet, but would love to.


There was a time when cars were built by men on an assembly line, but those men were largely replaced by robotics. I think AI may be an extension of that. I understand your rationale over profit, which makes perfect sense. I was only saying that what seems improbable today may be prevalent tomorrow. When/if the profit angle for AI shows up, it will seem obvious to us all.
 
One of the big problems with ChatGPT right now is that it's very difficult to know if what it's telling you is accurate. It replies in an authoritative and convincing voice that makes you believe it's truthful; however, there are many examples of it just making up inaccurate responses.

A good example can be seen on The Sea of Cortez Sailors and Cruisers FB page. Recently a contributor asked ChatGPT for the best times to bash north from Cabo to Ensenada. It was a very interesting interaction but, alas, not very accurate. It suggested going north early in the summer and talked of a monsoon effect that's more likely to occur in the SoC than off the coast. If you use FB, you can find it here.

On the internet today, if you want to be reasonably assured of accuracy you need to go to trusted websites. For example, for medical information you'll probably get more accurate information from the Mayo Clinic site than from some site promoting a diet book. AI solutions will need the same trust established before they should be relied upon. Presumably, someday, you will be able to interact with AIs of validated accuracy promoted by trusted actors. Until then, beware of what they tell you :)
 
Greetings,
Mr. FWT. Re: Your post #26. https://www.newscientist.com/article/dn13556-10-impossibilities-conquered-by-science/


This is a more "tongue in cheek" site but still true. https://www.theclever.com/20-things-that-used-to-be-science-fiction-but-are-now-a-reality/


Truly autonomous, self-aware AI IS coming, and I would not be surprised if it arrives in the next 10 years, if not sooner....much sooner.


You mention profit being the driver of AI development. Unscrupulous developers will most likely be much more interested in power.
 