2001: A Space Odyssey is one of the greatest science fiction films ever made…

It introduced the world to HAL 9000, a villain with abilities that were unthinkable when the movie came out in 1968.

HAL was an advanced artificial intelligence (AI) computer system that managed the operations on a spacecraft.

It could hear and speak to the astronauts and scientists on board the ship. It was also omnipresent, able to see into every corner of the ship – there was nowhere to hide from its gaze.

That became a huge problem when two of the astronauts grew concerned after HAL began to malfunction.

They even tried to escape HAL’s surveillance by conversing inside a sealed pod on the ship. But HAL could read lips, so it understood the astronauts’ plan to disconnect it.

Ultimately, that discovery cost nearly the entire crew their lives.

According to the film, HAL became operational in 1992. In reality, it wasn’t until this year that this technology began to arrive.

On September 25, 2023, OpenAI announced it was giving ChatGPT the ability to “see, hear, and speak.”

ChatGPT is a massively popular chatbot that can create humanlike dialogue.

It can also analyze and respond to photos. As an example, users can now upload a picture of the inside of their refrigerator, and the chatbot can reply with recipes based on its contents.
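For readers curious what this looks like under the hood, here’s a minimal sketch of how a developer might send a photo to a vision-capable model through OpenAI’s official Python library. The model name and the image URL are illustrative assumptions, not details from OpenAI’s announcement:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask a vision-capable model about a photo, e.g. the inside of a fridge.
# The model name and URL below are assumptions for illustration.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Suggest recipes using the food in this photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
        ],
    }],
    max_tokens=300,
)

print(response.choices[0].message.content)  # the model's recipe suggestions
```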

OpenAI says the ultimate goal is to create an AI capable of processing information in all the ways a human can.

However, it’s important to be aware of the dangers this development poses…

You see, as impressive as ChatGPT is, it suffers from inaccuracies and biases.

That may not seem like a big deal when you’re just asking ChatGPT to write you a poem or create a recipe.

But if we can take any lesson from 2001: A Space Odyssey, it’s that these issues must be addressed before we find ourselves spiraling toward a HAL-level disaster.

In today’s essay, I’ll outline the persistent problems ChatGPT suffers from.

I’ll also introduce you to a new AI project that’s setting out to become a better version of ChatGPT – and show you how you can profit from it today.

Don’t Trust ChatGPT

ChatGPT is always available to answer your questions – just don’t assume you’ll get the correct answers.

Take software questions, for example.

You’d think a program like ChatGPT would nail them, but that’s not the case.

In August, a Purdue University team tested ChatGPT’s accuracy when faced with 517 common programming questions… And 52% of ChatGPT’s answers were flat-out incorrect.

On top of that, 77% of the answers were verbose – they made simple answers more complicated than they needed to be.

The Purdue team found that this verbosity made questioners more likely to assume they’d been given a correct answer. In other words, ChatGPT’s polite and articulate language made completely wrong answers seem correct.

These kinds of inaccuracies led Stack Overflow, the most popular coding Q&A site, to ban answers sourced from ChatGPT.

This type of ChatGPT error – confidently giving an entirely made-up answer – is known as a hallucination.

And ChatGPT hallucinates about almost anything. Names and dates. Medical explanations. The plots of books. Internet addresses.

It even imagines false historical events. For example, ChatGPT claimed writer James Joyce and revolutionary Vladimir Lenin met in Zurich, Switzerland, in 1916 – an encounter that has never been confirmed to have happened.

Microsoft, which has invested tens of billions of dollars in OpenAI, has called these flat-out lies “useful inaccuracies.”

In an explanation that would be laughable if it weren’t so alarming, Microsoft executives said ChatGPT’s inaccuracies can be useful because they encourage folks to double-check the results.

But it’s not just inaccuracies that users have to deal with… ChatGPT has also been found to have political biases.

A January study from the University of Hamburg in Germany found that ChatGPT’s responses have a “pro-environment, left-libertarian orientation.”

Since then, further studies out of the U.K. and Brazil have found that the chatbot often gives left-leaning responses when prompted to discuss political issues.

OpenAI insists it has designed ChatGPT to avoid political bias.

But the fact is, ChatGPT is trained on reams of data scraped from the open internet. If there’s bias in that data, it will influence the model’s responses.
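To see why, consider a toy model – an illustration of the principle, not how ChatGPT is actually trained. If 80% of the training examples lean one way, a system that learns from frequency counts will reproduce that 80/20 skew in its answers:

```python
import random
from collections import Counter

# Hypothetical training corpus: 80% of scraped documents lean one way.
corpus = ["viewpoint_A"] * 80 + ["viewpoint_B"] * 20
counts = Counter(corpus)

# Our toy "model" answers by sampling in proportion to its training data.
answers = random.choices(list(counts), weights=list(counts.values()), k=1_000)

# Prints roughly 800 A's to 200 B's -- the data's skew becomes the model's skew.
print(Counter(answers))
```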

A Better Solution

Over 100 million people are currently using ChatGPT at least once a month, with just over 20 million of them using it every day.

This means that ChatGPT likely hallucinates, gives wrong answers, and offers politically biased responses millions of times a day.
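Here’s a rough back-of-envelope check. The queries-per-user figure below is my assumption, and Purdue’s 52% error rate applied only to programming questions, so treat this strictly as an order-of-magnitude illustration:

```python
daily_users = 20_000_000   # daily ChatGPT users, per the figure above
queries_per_user = 5       # assumed average queries per user per day
error_rate = 0.52          # Purdue's error rate on programming questions

flawed_answers_per_day = daily_users * queries_per_user * error_rate
print(f"{flawed_answers_per_day:,.0f} flawed answers per day")  # 52,000,000
```

Even if the real-world error rate is a tenth of Purdue’s, that’s still millions of flawed answers every day.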

And now, users can have full-on spoken conversations with the bot and send it photos. Who knows what new inaccuracies that will breed?

However, a solution to the errors is set to arrive soon.

Few people have done more to drive the adoption of AI than billionaire Elon Musk.

In 2011, Musk became an early investor in DeepMind… an AI company that was later acquired by Google.

Then in 2015, Musk co-founded OpenAI. However, he left in 2018 because he didn’t like the direction the other partners were taking the company.

He also leads Tesla, an electric vehicle maker that has invested heavily in AI to teach its cars to drive themselves.

Now Musk is launching a new AI project that’s set to be even more ambitious.

This project is on a mission to “understand the true nature of the universe” and create a better-performing chatbot with fewer inaccuracies.

Plus, Musk has decried ChatGPT and current AI technologies as being “woke.” And he recently said, “The danger of training AI to be woke – in other words, lie – is deadly.”

So it’s safe to say this new project will aim to be free of political bias.

Musk has seen extraordinary success with his past ventures such as PayPal, SpaceX, and Tesla.

If this new venture ends up just as successful, then it will revolutionize the AI universe.

But for that project to take off, it needs specialized technology.

And Daily editor Teeka Tiwari recently traveled to the hottest desert on the planet to uncover a company that’s supplying Musk with a key technology for his AI project.

Teeka believes that if you buy shares now, before Musk makes the official announcement about this key supplier…

You’ll have a chance to make incredible gains from a blue-chip stock.

Teeka just put together an investigative report on this huge opportunity. You can watch it right here.

Regards,

Michael Gross
Analyst, Palm Beach Daily