AI 2023. Meet ChatGPT. - page 109

 
Aleksey Nikolayev #:

I think there was also the problem of legally establishing responsibility for decisions.

If there was a problem, it has not gone away. From the patient's point of view, responsibility for the decision lies with the doctor. But what if the doctor starts shifting that responsibility onto OpenAI? After all, they are the ones who deployed GPT-4 everywhere, so they are at fault!

It is clear that doctors will be held responsible in any case. And what difference does it make what tools they use before making a decision? They can consult a reference book, ask colleagues, read an internet forum, read the medical literature, or... ask an AI. Against this background, AI offers no particular advantage.

 
Peter Konow #:

If there was a problem, it has not gone away. From the patient's point of view, responsibility for the decision lies with the doctor. But what if the doctor starts shifting that responsibility onto OpenAI? After all, they are the ones who deployed GPT-4 everywhere, so they are at fault!

It is clear that doctors will be held responsible in any case. And what difference does it make what tools they use before making a decision? They can consult a reference book, ask colleagues, read an internet forum, read the medical literature, or... ask an AI. Against this background, AI offers no particular advantage.

Perhaps the point is that a doctor can avoid responsibility for a bad outcome only by proving that he acted strictly in accordance with the established rules and algorithms. But any sufficiently strong AI is a black box: it gives only the answer, not the way it was obtained (in ML this is called non-interpretability).

But perhaps current AI will also be able to justify its advice to the doctor.

 

Here's what ChatGPT has to say about the introduction of AI into health services:


Talk about the legal issues surrounding the introduction of AI into healthcare services.

...

The introduction of artificial intelligence into healthcare services has its own peculiarities and raises a number of legal questions. Some of the major legal issues related to the implementation of AI in healthcare include the following:

  1. Data privacy: medical data is sensitive information, and any use of it must be protected accordingly. The implementation of AI in medicine must take meticulous care to ensure the security and confidentiality of this data.

  2. Regulatory issues: the use of AI in healthcare services may be subject to regulation and scrutiny by medical authorities and other regulatory bodies to ensure that such use meets medical standards and does not lead to adverse consequences for patients.

  3. Legal liability: the use of AI in medicine may have legal consequences if the technology is misused or leads to negative outcomes for patients. In such cases, the question arises of who is responsible for those consequences.

  4. Ethical issues: the use of AI in healthcare services may raise ethical questions, such as those related to patient autonomy and decision-making, or to the fact that AI can be used for a variety of purposes, including testing new medical treatments.

In general, the introduction of AI into healthcare services requires careful evaluation of legal, ethical and regulatory issues. As in other industries, these issues may vary depending on the particular country and jurisdiction where AI in medicine is being applied.

 
Aleksey Nikolayev #:

Perhaps the point is that a doctor can avoid responsibility for a bad outcome only by proving that he acted strictly in accordance with the established rules and algorithms. But any sufficiently strong AI is a black box: it gives only the answer, not the way it was obtained (in ML this is called non-interpretability).

But perhaps current AI will also be able to justify its advice to the doctor.

I assume patients would object to an uninterpretable "black box" being used as an expert system to decide their health issues.

As AI itself said above, there are legal hurdles:

1. Data privacy.

2. Regulation of the use of the new technology.

3. Responsibility for decision making.

4. Ethical issues.

 
If AI statistically gives more correct diagnoses than humans, then reasoning along the lines of "I trust humans more" is tantamount to "this doctor is Indian, can I have one of ours instead?"
 

Tell us, what kind of work can AI do in healthcare services?

...

Artificial intelligence (AI) can do a variety of jobs in healthcare services. Some of them include the following:

  1. Medical image analysis: AI can be used to analyse medical images such as X-rays, magnetic resonance imaging (MRI) and computed tomography (CT) scans to help doctors quickly and accurately determine the presence of diseases or injuries.

  2. Medical data processing: AI can be used to process large amounts of medical data to help doctors and researchers find new treatments, identify risks, and improve patient outcomes.

  3. Decision support: AI can be used to support physicians' decision-making by providing them with treatment information and suggesting best practices based on research and databases.

  4. Diagnosis and prognosis: AI can be used to diagnose diseases and predict outcomes based on analysing medical data and patient history.

  5. New drug development: AI can be used to develop new drugs and treatments based on analysing genetic information and other medical data.

  6. Improving the quality of patient care: AI can be used to improve the quality of patient care, for example, by providing personalised recommendations for treatment and disease management.

  7. Robotics and automation: AI can be used to automate and robotise certain processes in medicine, such as operations and procedures, which can help improve the accuracy and efficiency of these processes.

These are just some examples of how AI can be used in healthcare services. As technology advances and AI algorithms improve, we can expect to see even more possibilities.

...

peterkonow

...

What work in medical services is AI already doing?

...

AI is already doing a variety of work in the medical services industry. Some examples include:

  1. Automatically recognising and analysing medical images such as X-rays, MRIs and CT scans to diagnose various diseases.

  2. Automatically processing and analysing medical data, such as electronic health records, to identify disease and treatment trends and provide recommendations to improve patient outcomes.

  3. Supporting physician decision-making by providing information on best practices and recommendations based on research and databases.

  4. Diagnosing and predicting diseases using patient history data and analysing genetic information.

  5. Developing new drugs and treatments using analyses of genetic information and other medical data.

  6. Improving the quality of patient care, for example by providing personalised advice on treatment and disease management.

  7. Robotising and automating some processes in medicine, such as operations and procedures.

These are just a few examples of what AI is already doing in healthcare services. As technology advances and AI algorithms improve, we can expect to see even more opportunities in this area.

...

peterkonow

Name the AI models doing the above work in medical services.

...

There are many AI models that do work in the field of medical services. Some of them include:

  1. IBM Watson Health - used to analyse medical data and support doctors' decision making.

  2. Google DeepMind - used to analyse medical images and predict diseases.

  3. Babylon Health - used to diagnose and treat diseases through telemedicine and consultation with doctors.

  4. Zebra Medical Vision - used for automatic recognition and analysis of medical images.

  5. Enlitic - used to automatically recognise and analyse medical images and patient data.

  6. Insilico Medicine - used to develop new drugs and treatments based on analysing genetic information and other medical data.

  7. Medtronic - used to automate medical procedures, such as surgery.

These are just a few examples of AI models used in healthcare services. With the advent of new technologies and the development of algorithms, we can expect to see even more AI models designed to perform tasks in medicine.

 
Judging by the AI's response above, neural networks have long been used in medicine. But there has never been a talking one before. Why it should have a special status is unclear.
 

Tell us who and when introduced these AI models into medicine.

...

AI models for medical applications have been developed and deployed by many companies and organisations over the last few decades.

IBM Watson Health was introduced in 2011 and since then it has been used to analyse medical data and support physician decision making.

Google DeepMind started working in the field of medical science in 2014. In 2016, the company partnered with the National Health Service (NHS) in the UK to use its deep learning algorithms to predict disease progression and analyse medical images.

Babylon Health was founded in 2013 and has since become one of the leaders in telemedicine and physician consultation.

Zebra Medical Vision was founded in 2014 and develops algorithms for automatic recognition and analysis of medical images.

Enlitic was founded in 2014 and has since become known for its technologies for automatic recognition and analysis of medical images and patient data.

Insilico Medicine was founded in 2014 and uses machine learning and AI to develop new drugs and treatments based on analysing genetic information and other medical data.

Medtronic was founded in 1949 and has been developing and manufacturing medical devices for more than 50 years. In recent years, the company has begun integrating machine learning and AI technologies into its products to automate procedures in medicine, such as surgical procedures.

Of course, this is not a comprehensive list of companies working in the field of AI in medicine, but these are some of the most well-known and significant players in the field.

 
Peter Konow #:
Judging by the AI's response above, neural networks have long been used in medicine. But there has never been a talking one before. Why it should have a special status is unclear.

Obviously, we are talking about an AI that makes a decision to prescribe a particular treatment for a particular patient. Even if the share of responsibility of this AI is less than 100% (but more than 0%).

So AI has long been used, for example, in washing machines, which those same hospitals may well have, but we are talking about more than such a merely formal use.

 
Aleksey Nikolayev #:

Obviously, we are talking about an AI that makes a decision to prescribe a particular treatment for a particular patient. Even if the share of responsibility of this AI is less than 100% (but more than 0%).

AI has long been used, for example, in washing machines, which those same hospitals may well have, but we are talking about more than such a merely formal use.

Someone who is not a legal subject cannot be held responsible. And logically, whoever cannot be held responsible cannot make responsible decisions. If the doctor has the last word, then a talking AI is no different from a silent one.

It doesn't matter what it says. Not everything has to be listened to.
