
Perhaps the load on the server determines the strength of its intelligence.
This answer was generated in the free version.
Bad news (or maybe good news, who knows?).
I suspected this would happen; the solution was too obvious. But I didn't think it would come so soon.
ChatGPT is being integrated with specialised software via plugins to extend its capabilities. It will now use third-party software to handle complex tasks that the language model can't manage on its own.
Like I said, the language model is a grand machine interface whose capabilities are just beginning to unfold.
A revolution is coming.
https://www.youtube.com/watch?v=ZSfwgKDcGKY&list=PLwsc0Oqqkbu0fJrPWLFf9heFxzJ4R6t4N&index=1&t=622s
There are sets of programmes designed to solve a wide range of practical tasks.
Each of them has, or can open, an API.
Each works according to classical algorithms: for example, programs for calculating physical processes, as well as astronomical, meteorological, mathematical, chemical, medical, biochemical programs and many others. They are countless.
The advantage of simple algorithms over language models is obvious: they quickly and accurately produce the unambiguous results that specialists need in practical work. Through this "door", LLMs gain access to tasks that were previously impossible for them. All that is needed is to select the desired programme, activate the function, send it a set of parameters extracted from the prompt, and return the response to the user.
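The loop described above can be sketched in a few lines. This is a minimal illustration, not any actual plugin API: the registry, the two "classical" programs, and the dispatcher are all hypothetical names invented for the example. The point is only that the LLM's job reduces to choosing a tool name and filling in parameters, while an ordinary deterministic algorithm does the exact computation.

```python
# Sketch of the plugin pattern: the model extracts a tool name and parameters
# from the user's prompt; a dispatcher routes them to a classical algorithm
# and returns the exact result. All names here are hypothetical.
import math
from typing import Callable, Dict

def orbital_period(semi_major_axis_au: float) -> float:
    """Classical algorithm: Kepler's third law, period in Earth years."""
    return math.sqrt(semi_major_axis_au ** 3)

def compound_interest(principal: float, rate: float, years: int) -> float:
    """Classical algorithm: deterministic financial calculation."""
    return principal * (1 + rate) ** years

# Plugin registry: each entry exposes one specialised program's API.
REGISTRY: Dict[str, Callable] = {
    "orbital_period": orbital_period,
    "compound_interest": compound_interest,
}

def dispatch(tool_name: str, **params):
    """Route the parameters the model extracted to the selected program."""
    if tool_name not in REGISTRY:
        raise KeyError(f"No plugin registered for '{tool_name}'")
    return REGISTRY[tool_name](**params)

# Having parsed the prompt, the model would emit something like:
print(dispatch("orbital_period", semi_major_axis_au=1.0))  # Earth: 1.0
print(dispatch("compound_interest", principal=1000.0, rate=0.05, years=10))
```

The language model never computes anything here; it only fills in the call. That is exactly why the result is fast and unambiguous.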
Here we go. Final stop. Everybody out.
Based on speculative analysis of recent news, I assume that ChatGPT will become an "expert system" after all. Not by itself, of course, but with the help of plugins for specialised programs, debugged and used by experts in their professional fields.
A fancy GUI with dozens of windows and hundreds of parameters is a thing of the past. It requires routine memorisation and navigating confusing settings. Anyone familiar with the Unity and Unreal Engine platforms will know exactly what I'm talking about. It takes effort and hours to perform simple actions.
And then... the language model comes knocking at the door. Do you find it difficult to navigate hundreds of system parameters? Plug in an LLM and you'll be more productive. A speech interface will reduce stress and the demands on your attention. You'll forget about setups spread across dozens of windows and type less on the keyboard. Formulate tasks for the AI; it will pick out the functionality of your system for you. Want to work the old-fashioned way? You're welcome. Climb through your windows.
From now on, the list of tasks a language model cannot solve will dramatically shrink. At some point, the term "language model" will become obsolete and fall out of use: it will come to mean "a chatbot not connected to the software environment". That is why porting language models to PCs is a dead end.
Conclusion:
A linguistic interface built on the foundation of universal knowledge, interfaced with professional development environments and specialised programs, fully represents the concept of AI. And that's why we see the consolidation of professional and commercial software around the language models of IT giants.
Today, this is the only way they can survive.
Conclusion:
Today, AI is a global interface, a hub of knowledge and functions of connected software.
Nothing more. No matter how much hype there is around it.
Abstracts:
Here I would clarify that we are talking about plugins. Of course, trained LLMs can be pretty good at generating code that solves small problems. They can write the skeleton of a website, or algorithms for a simple EA or script, but no more than that. Their coding ability is limited; otherwise they would write the functionality themselves instead of calling third-party programs, and nothing would need to be plugged in.
We looked for the limitations of language model technology, and they are now in front of us. It is logical to assume that the announcement of plugins indicates that the ceiling for developing proprietary functionality has been reached. The model cannot be trained to solve the tasks performed by specialised algorithmic programs, or doing so would require an excessive consumption of resources.
Thus, the limit of transformer technology has been located. Connecting plugins indicates the end of progress and a plateau. Being unable to perform these functions by itself, the model is being transformed by its creators into a multi-programme language interface. In the short term, it looks like a qualitative leap, but in reality it is the beginning of a slowdown and a stop.
I see this as a sign that the potential of transformer technology has been used up. Consolidating functions under a common linguistic interface has nothing to do with developing AI technology, only with expanding it.
In other words, the "expansion" of AI and the progress of its technology are different things. Right now the commercial potential of transformer technology is being massively exploited, while the technology itself has already hit its ceiling. Precisely because the creators are now busy monetising it, development of the technology has stopped. And it will not resume in the near future. Why would it? There's money to be made.)
Peter, this research could pass for a dissertation)))))
Thanks))))
Default settings, dialogue mode. I gave the chat some code to analyse and it gives me this:
Is it broken or what?
Has anyone else experienced this kind of mental confusion in the chat?