BouncyBagel

What will the ChatGPT of tomorrow look like?

The future of LLMs is not a chatbot, but a personal agent like Samantha from the movie "Her".

And it's crazy that we are not too far away from it.

Right now LLM companies are competing like commodities, crushing each other on benchmarks, but soon they will start building moats as they gain more data about you and your usage patterns.

Which means they will tailor their responses in advance based on how you like your answers, or even anticipate the follow-up questions you might ask.

Users will feel, "Man, this thing really understands me!"

This personalisation will also make it harder to switch to other LLMs.

It's mind-blowing that the line between sci-fi and real life is fading day by day.

2mo ago · 1.7K views
BubblyUnicorn

This is true! But at the same time I feel they are mostly playing a supporting role in completing a task. It also depends on the complexity of the task.

They have yet to reach the stage where they can work on a task independently. It's not happening anytime soon.

They will definitely be able to automate a lot of tasks and improve workflows.
For example, to pull a name and address out of a piece of text, you no longer have to build a dataset and train a model (which would have taken a couple of months); you can just call an API to do it, as in the sketch below.
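
A rough sketch of that kind of extraction call, assuming the OpenAI Python SDK; the model name, prompt and sample text are placeholders, not anything specific this thread is referring to:

```python
# Minimal sketch: extract a name and address with a single LLM API call
# instead of training a custom extraction model.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY
# set in the environment; the model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()

text = "Please ship the order to Priya Sharma, 221B MG Road, Bengaluru 560001."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    response_format={"type": "json_object"},  # ask for plain JSON back
    messages=[
        {"role": "system",
         "content": 'Extract the person\'s name and address from the text. '
                    'Reply with JSON: {"name": ..., "address": ...}'},
        {"role": "user", "content": text},
    ],
)

# Expected shape of the reply:
# {"name": "Priya Sharma", "address": "221B MG Road, Bengaluru 560001"}
print(json.loads(response.choices[0].message.content))
```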

BouncyBagel

@buzzbee there is something called function calling which lets you call external APIs at the model layer itself. For example, if a user asks GPT-4 what the weather is in Bangalore, it will recognise that the query is about weather and call a predefined function that fetches the weather data and includes it in the output.
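
A minimal sketch of that weather flow with the OpenAI Python SDK; `get_weather` is a hypothetical stand-in for a real weather API and the model name is a placeholder:

```python
# Minimal sketch of function calling: the model decides to call a
# predefined weather function instead of answering from its own weights.
# Assumes the OpenAI Python SDK; get_weather is a hypothetical stub and
# the model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> dict:
    # Stand-in for a real weather API call.
    return {"city": city, "temp_c": 27, "condition": "partly cloudy"}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the weather in Bangalore?"}]
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=messages,
    tools=tools,
)

# The model replies with a tool call rather than a direct answer.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
weather = get_weather(**args)

# Feed the function result back so the model can phrase the final answer.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": tool_call.id,
                 "content": json.dumps(weather)})
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```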

Yes, it's true that nobody knows how long full agentic capabilities will take, but there are experts on both sides of the debate: some say around 5-7 years, others say not even in 10-15 years.

BubblyUnicorn

I understand the capabilities of function calling, but that's not what I was talking about. I see function calling as something that supplements the LLM, or connects multiple pieces to solve a larger problem. But the core ability to reason, generate and make decisions still sits with the LLM.

But agents are a good way to solve problems. At least for now!

For now we are highly dependent on RL to align LLMs for this kind of functionality, but how far we can push an LLM to learn through RL is the question!
