1. Does information processing become faster? The availability of "query and learn" information does increase when you compare a semantic search engine (as represented by the language models under discussion) with a traditional indexed keyword search. But when it comes to accessibility in the sense of "query, learn and apply in a real task", i.e. practically useful knowledge, I see no qualitative leap. And the limiting factor here is not the search engine at all, but the human being, bound by the biochemical nature of thinking, memory and mind. In my opinion, the availability of information already far exceeds the individual's ability to process it (given the volumes, the noise, the false information and other obstacles, such as paywalled scientific databases).
To master something to a practically useful level, it is not enough just to hear an answer from a search engine (however advanced or impressive it may be); you need to make the effort to grasp the real nuances, develop skills, build mastery, and so on. That is, specialists do not need AI assistants, since they have already mastered their field, while beginners are still unable to solve real tasks at a specialist's level on the basis of search-engine answers alone. As an example, recall the practice of some beginners of "programming" from Stack Overflow answers. It is of little use, however accessible the answer is. Increasing the availability of a ready-made answer does not make the user more capable of programming. To become a specialist, you must train your mind on real problems in your field, and do it yourself.
2. The time has come when specialists in some fields compete not only among themselves (which is hard enough even now) but also with a cheap generator of their product. I don't think life will get easier for these specialists. What good is it to them that technology lets them do their work more easily, if they receive less and less of that work?
3. If we think hard about which global processes are at play here, it may turn out that this acceleration, if it happens at all, is not in the interests of the Earth's population. But I think it is better to postpone analysing who drives those global processes.
1. Is work with information accelerated? In my opinion, certainly. LLMs connected to the Internet will speed up work with information.
First, consider the usual scenario of searching a website: a sequential, sometimes random process that begins with scanning the "menu" and settling on a "navigation" plan. Once on a page, we are greeted by related content: a motley UI, links, images, news feeds, comments, adverts. From the first second the site deliberately distracts us, with the intent of delaying us. They need us to stay longer, to wander and loop around. Every site does this. And when the answer we find doesn't suit us, we move on to the next one, where we are delayed again. The situation is familiar to everyone.
Now, as a counterbalance, imagine receiving information on request, without preconditions. We are given candidate answers with links to sources, without adverts and info-rubbish. Websites first hand over the requested content and only then invite us to visit. Isn't that how it should have been from the start?
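To make the contrast concrete, here is a minimal sketch of that "answer first, sources attached" flow. Every name in it (the `Source` type, the `retrieve` and `answer` functions, the example URLs) is hypothetical and stands in for whatever a real semantic engine would do; it is an illustration of the idea, not any product's API:

```python
# Hypothetical sketch: return the requested content first, with source links
# attached, instead of a ranked page of ad-laden results.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    snippet: str

def retrieve(query: str) -> list[Source]:
    """Stand-in for a semantic index lookup; returns candidate passages.
    A real engine would rank a live corpus by semantic similarity to the
    query; here we just return canned examples."""
    return [
        Source("https://example.org/a", "Passage addressing the query topic."),
        Source("https://example.org/b", "Another relevant passage."),
    ]

def answer(query: str) -> str:
    """Compose the answer first, then list the sources the user may visit."""
    sources = retrieve(query)
    summary = " / ".join(s.snippet for s in sources)  # stand-in for an LLM summary
    links = "\n".join(f"  - {s.url}" for s in sources)
    return f"{summary}\nSources:\n{links}"

print(answer("how should search results be presented?"))
```

The point of the sketch is only the ordering: content first, navigation second.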
Now to the issue of education. Indeed, a quality education can only be obtained through hard work. One needs textbooks, teachers, tutors, grades, homework, tests, exams, and so on. The time such an education takes can be cut only at the expense of quality. Therefore LLMs will not accelerate school or university education. But they will speed up the students' "brains".
To clarify:
In terms of training the mind, practising problem solving, and developing discipline, persistence and ambition, LLMs have no effect. It all depends on the individual, and here I agree with you.
2. Competition of specialists with a cheap generator of their product:
Yes, there will be less work for specialists, and they will be competing with a cheap generator of their product (as you said). Agreed. However, this is a side effect of technological progress and doesn't negate the fact that things will get faster.
Problem solving, checking answers, searching for information, finding sources, comparing results, estimating parameters, predicting options, template matching, pattern recognition, data classification and so on will speed up. The linguistic interface will help unite many specialised programs around one complex task, without opening ten applications on the taskbar and entering parameters into settings windows one by one (see the sketch below). Work will speed up, for sure.
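A minimal sketch of what such a linguistic "glue" layer could look like: one request routed to several specialised tools instead of ten open windows. Everything here is assumed for illustration; the tool functions are toy stubs, and the trivial keyword router stands in for the step where a real LLM would decide which tool to call:

```python
# Hypothetical sketch of a linguistic interface uniting specialised programs
# around one task. A real system would have an LLM choose and parameterise
# the tools; a naive keyword router stands in for that step here.

def estimate_parameters(task: str) -> str:
    return f"parameter estimates for: {task}"

def compare_results(task: str) -> str:
    return f"comparison table for: {task}"

def find_sources(task: str) -> str:
    return f"source list for: {task}"

TOOLS = {
    "estimate": estimate_parameters,
    "compare": compare_results,
    "sources": find_sources,
}

def route(request: str) -> list[str]:
    """Dispatch one natural-language request to every tool it mentions."""
    outputs = [tool(request) for keyword, tool in TOOLS.items()
               if keyword in request.lower()]
    return outputs or ["no matching tool; a real assistant would ask to clarify"]

for line in route("Estimate the model parameters, compare results, list sources"):
    print(line)
```

One request, several specialised programs, no window-juggling: that is the acceleration being claimed.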
3. Acceleration is not in our best interest:
Speeding up life is a side effect of civilisation and can only be stopped by stopping technology. If things get too bad, we can make that choice. Probably...
...imagine receiving information on request, without preconditions. We are given candidate answers with links to sources, without adverts and info-rubbish. Websites first hand over the requested content and only then invite us to visit. Isn't that how it should have been from the start?
In the past, you had to finish the job of a search engine that was poor at semantic analysis of a query and returned the raw results of a crude index search (plus paid placements: nothing personal, just business). You had to filter the valuable from the rubbish yourself. Now the conversational interface closes the user's contact onto itself: it processes the information itself (with varying success, but all problems are solvable), ranks it by degree of payment (why not, surely business is expected here too?), and, above all, accustoms the user to the scheme "if you need information, the best source is me".
Will users make more requests and get more answers? I don't think so, because the availability of information does not determine the need for it. Which means semantic search engines will simply reduce the time and effort spent searching (which isn't usually done for fun, is it?). Meanwhile, the habit of filtering and double-checking information from the web (for those who had it) will become a thing of the past, and any AI answer will be taken on faith. Oh, what fabulous possibilities this opens up for influencing the minds of consumers, voters, patients, etc.
Problem solving, checking answers, searching for information, finding sources, comparing results, estimating parameters, predicting options, template matching, pattern recognition, data classification and so on will speed up. The linguistic interface will help unite many specialised programs around one complex task, without opening ten applications on the taskbar and entering parameters into settings windows one by one. Work will speed up, for sure.
Since all of this (except for media content) is, one way or another, about producing real-world products and services, we shouldn't expect the amount of such work to grow without a dramatic increase in the world's population. And if there is no more work, while the productivity of workers increases, the need for workers will decrease. This is in fact already the trend in some fields. Competition grows, the cost of labour falls, and for the same money the remaining workers will have to "manage more" with the help of these life-enhancers.
In general, I think that with these wonderful technologies we will have a lot to philosophise about for a long time to come.
I found a way to send long messages to the AI. Check this out:
Pause Giant AI Experiments!
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
22 March 2023
An open appeal from the scientific community to all AI labs, companies and IT corporations developing AI systems more powerful than GPT-4 has been published at https://futureoflife.org/, requesting an immediate pause of at least 6 months in the training of such systems.
To date, the letter has been signed by 1,123 people, among them scientists, researchers, teachers, university professors, science-prize winners, and workers in high tech and artificial intelligence (the most famous, perhaps, being Elon Musk).
Text of the letter (original: https://futureoflife.org/open-letter/pause-giant-ai-experiments/; translated by ChatGPT):
Artificial intelligence systems with intelligence comparable to that of humans can pose profound risks to society and humanity, as extensive research[1] shows and major AI labs[2] acknowledge. As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care. Unfortunately, this level of planning and management is not happening; over the past few months AI labs have been locked in an out-of-control race to create and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control.
Modern AI systems are becoming capable of competing with human intelligence in general tasks,[3] and we must ask ourselves: Should we allow machines to flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that may eventually outnumber, outsmart and replace us? Should we risk losing control of our civilisation? These decisions must not be delegated to unelected technology leaders. Powerful AI systems should be developed only when we are confident that their effects will be positive and their risks manageable. This confidence must be well founded and grow with the magnitude of a system's potential effects. OpenAI's statement on artificial general intelligence notes that "at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all artificial intelligence labs to immediately pause for a period of at least 6 months the training of artificial intelligence systems more powerful than GPT-4. This pause must be public and verifiable, and include all key participants. If such a pause cannot be enacted quickly, governments should step in and impose a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for the design and development of advanced AI, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond reasonable doubt. This does not mean a halt to AI development in general, but merely a stepping back from the dangerous race toward ever larger, unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should include, at a minimum: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do the same here. Let's enjoy a long AI summer, rather than rushing unprepared into autumn.
Found out about the letter here:
Imho, American propaganda is hyping AI, appealing to subconscious fears formed in childhood by science fiction and Hollywood. The more I analyse the real effect of language models on society and technological progress, the more convinced of this I become.
In the next posts, we will analyse the real "impact" in more detail, leaving myths and fairy tales aside.
I must say that I have great respect for the scientists who signed the letter. However, I am tired of the hype. I want to see an objective picture of what is happening. Which threats from AI are real and which are far-fetched? Where is real life and where is the fairy tale? Where is the hype and where is the genuine concern?
Musk has long been scaring the public about AI. If he alone had authored the letter, I wouldn't have taken it seriously. However, the letter was signed by people well known in scientific circles, and that makes the issue worth taking seriously.
Something suggests that the scientists are afraid not of AI, but of people. And that is why they demand measures to regulate and control development. Just as with nuclear weapons...
Perhaps it is an attempt to slow down competitors in the next technological race. Anyone can pretend to be worried about the future of mankind while secretly, in their own business interests, continuing development under the blanket and preparing for rapid deployment once the "restrictions" are lifted.
The possibility of such perfidy certainly exists. However, looking at the names, titles and scientific institutions of the signatories (on the original page, below the text of the letter), doubts arise. Perhaps many signed without thinking, or as a tribute to colleagues. Some may have been stoking fears; others may have aimed to slow rivals down. Motives probably vary.
It's worth noting that the letter contains an outright call to hand control of AI development over to lawmakers if the pause does not happen. This is alarming; it's odd that they would even mention it. It would be better if they appealed to the conscience and responsibility of the developers.
Below I have highlighted the key phrases that frankly hint at where this letter really comes from.
"...We therefore call on all artificial intelligence labs to immediately pause for a period of at least 6 months the training of artificial intelligence systems more powerful than GPT-4. This pause must be public and verifiable, and include all key participants. If such a pause cannot be enacted quickly, governments must step in and impose a moratorium."
"...AI labs and independent experts should use this pause to collaboratively develop and implement a set of common security protocols for the development and design of advanced AI systems that will be rigorously vetted and monitored by independent external experts. These protocols should ensure that systems following these protocols are secure without question."
"...In parallel, AI developers should work with lawmakers to dramatically accelerate the development of robust AI control systems. These should include, at a minimum: new and competent regulatory bodies dedicated to AI; control and tracking of highly skilled AI systems and large amounts of computing power; provenance and watermarking systems to help distinguish real from synthetic and track model leaks; a robust audit and certification ecosystem; accountability for AI harms; robust public funding for research into the safety of tech